Everything you need to know about Concurrent React (with a little bit of Suspense)

And why it's a game changer

Intro

UIs are composed of many different "parts", and each part responds to user interaction at a different rate.

Some parts, like input fields in a form, are fast and respond almost instantaneously to user interaction, while others, like a very long filtered list or navigation between pages, are slow and may take a while to respond.

With synchronous rendering, which is how React without concurrent features (and all other JS UI libs/frameworks) operates, there are cases where the slow parts of the UI drag down the fast parts by blocking their execution and thus degrading their responsiveness.

React’s concurrent renderer decouples the fast parts from the slow parts by allowing us to render the slow parts in the background without blocking the fast parts, so that each part can respond to user interaction at its own pace.

So, while concurrent rendering in React won’t make your application faster, it’ll make it feel faster by making the UI more responsive.

In this post, we’ll explore Concurrent React in depth: what problems it solves, how it works, and how to leverage it through concurrent features.

The Problem

Picture this:

You’re writing a component that renders a filtered list, and the filtering takes place on the client (it doesn’t cause extra calls to the server). Let’s also say that, for whatever reason, rendering the filtered list is a CPU-intensive task, so it takes several milliseconds to render.

In this setting, we have two main UI elements, a text input whose value will be used to filter the list and the list itself.

To illustrate the situation, here’s an example of what it could look like:

Live Demo: https://stackblitz.com/edit/react-xsgxkg?file=src%2FApp.js

import React, { useState } from "react";
import { list } from "./list";
import "./style.css";

export default function App() {
  const [filter, setFilter] = useState("");

  return (
    <div className="container">
      <input value={filter} onChange={(e) => setFilter(e.target.value)} />

      <List filter={filter} />
    </div>
  );
}

const List = ({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
};

const sleep = (ms) => {
  const start = performance.now();

  while (performance.now() - start < ms);
};

In this example, <List /> has been artificially slowed down with the sleep function, which synchronously blocks the main thread, to simulate a CPU-intensive task.

Sidenote: Instead of using a huge list to simulate a heavy CPU load, we’ll use this sleep function. This way we can easily simulate different workloads while also providing a more consistent experience for readers, who will run the demos on very different hardware.

Notice how when we start filtering the list, the whole UI freezes for a moment until it can process everything and then it "jumps" to its final state.

This is what it looks like without being artificially slowed down.

Ideally, we’d find a way to optimize this component to make it render faster, but there’s only so much we can do in terms of optimization, and sometimes even with these optimizations in place, the render will still be unacceptably slow.
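
For instance, a first attempt might be to memoize the filtering itself with useMemo. Here’s a minimal sketch (it reuses the list and sleep helpers from the demo above), which avoids recomputing the filtered array on every render but can’t do anything about the rendering work itself being slow:

import React, { useMemo } from "react";
import { list } from "./list";

const List = ({ filter }) => {
  // Recompute the filtered array only when `filter` changes
  const filteredList = useMemo(
    () =>
      list.filter((entry) =>
        entry.name.toLowerCase().includes(filter.toLowerCase())
      ),
    [filter]
  );

  // The render itself is still slow (the busy-wait `sleep` from the demo
  // stands in for genuinely heavy rendering work), so memoizing the
  // filtering alone doesn't keep the UI responsive
  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
};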

However, in those cases, we’d like to keep the fast parts of the UI responsive, even though other parts might be slow.

In our example, it’s only <List /> that is slow, so why should the whole UI suffer and become unresponsive because of a single slow component? Couldn’t we keep the input responsive despite <List /> being slow?

And the culprit is…

Synchronous Rendering

Without concurrent features (i.e. without using startTransition, useTransition or useDeferredValue), React renders components synchronously, which means that once it starts rendering, nothing short of an exception can interrupt it; it will only take on any other task after it has finished rendering.

In practice, this means that no matter how long the render takes, any new events that occur during that render will only be handled after it finishes.

Here’s a neat experiment that illustrates that:

Live Demo: https://stackblitz.com/edit/react-slj4mv?file=src%2FApp.js

Sidenote: You can open the console in the live preview by clicking the tab at the bottom

import React, { memo, useState } from "react";
// `sleep` is the same busy-wait helper defined in the previous example

export default function App() {
  const [value, setValue] = useState("");
  const [key, setKey] = useState(Math.random());

  return (
    <div className="container">
      <input
        value={value}
        onChange={(e) => {
          console.log(
            `%c Input changed! -> "${e.target.value}"`,
            "color: yellow;"
          );
          setValue(e.target.value);
        }}
      />

      {/* 
        By using a random number as the key to <Slow />,
        each time we click on this button we rerender it,
        even though it is memoized.
      */}
      <button onClick={() => setKey(Math.random())}>Render Slow</button>

      <Slow key={key} />
    </div>
  );
}

/**
 * We're memoizing this component so that changes
 * to the input do not make this component rerender.
 */
const Slow = memo(() => {
  sleep(2000);

  console.log("%c Slow rendered!", "color: teal;");

  return <></>;
});

In this experiment, we have a controlled input, a <Slow /> component that takes 2 seconds to render and a button that forces <Slow /> to rerender.

<Slow /> takes no props and is memoized, so input changes won’t make it rerender. However, it is keyed with key, and every time we click the button we set that key to a different random number, which is why clicking the button forces <Slow /> to rerender even though it is memoized.

We start interacting with the experiment by writing "Hello" in the input and as we can see, each time the input changes, we log the change to the console.

Then we click the button, which makes <Slow /> rerender, and while React is rendering, we write "World" in the input.

Even though we can still interact with the browser while JS is busy rendering React components, these interactions do not interrupt the render. This is evident from the fact that, although <Slow /> only logs its "Slow rendered!" message after the call to sleep, and our interactions with the input happen mostly before sleep concludes, every log resulting from those interactions appears after the "Slow rendered!" log.

This is why slow components end up blocking the fast parts of the UI: once we start rendering a slow component, we can only address new updates to other parts of the UI after we finish rendering it.

The Solution

To solve this problem, React 18 introduces concurrent rendering, which I’ll explain below.

We start with updates.

In the context of concurrent rendering, an update is anything that causes a rerender, so, for instance, anytime we call setState with a new value, this will cause an update.

const Component = () => {
  const [count, setCount] = useState(0);

  // This is an update because it will cause a rerender
  const handleIncrement = () => {
    setCount((count) => count + 1);
  };

  // This is NOT an update: since we're passing the same value
  // to `setCount`, it won't cause a rerender
  // (React may still render this component one extra time before
  // bailing out, but subsequent calls won't cause rerenders)
  const handleSame = () => {
    setCount(count);
  };

  return (
    <>
      <button onClick={handleIncrement}>Increment</button>
      <button onClick={handleSame}>Same</button>
    </>
  );
};

Then, we divide updates into two categories:

  1. High Priority (Urgent) Updates
  2. Low Priority (Non-Urgent) Updates

High priority updates are caused by calls to setState, useReducer’s dispatch, or by useSyncExternalStore (when the slice of the store we’re subscribed to updates). They trigger a high priority render, which is just the usual synchronous render we’re used to, so once a high priority render starts, it won’t stop until it has finished.

Additionally, it’s important to mention that when the app renders for the first time, as a result of a call to ReactDOM.createRoot, this first render is always a high priority render.
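
For reference, this is the standard React 18 entry point that kicks off that first render (assuming a root DOM node with id "root"):

import { createRoot } from "react-dom/client";
import App from "./App";

// The render scheduled here is always a high priority (synchronous) render
const root = createRoot(document.getElementById("root"));
root.render(<App />);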

const Component = () => {
  const [filter, setFilter] = useState("");

  const handleInputChanged = (e) => {
    // Causes a high priority update
    setFilter(e.target.value);
  };

  return (
    <>
      <input value={filter} onChange={handleInputChanged} />
    </>
  );
};

Low priority updates are caused by calls to startTransition or useDeferredValue, and trigger a low priority render, which only starts to run after high priority renders have finished and which can be interrupted by any high priority update.

const Component = () => {
  const [filter, setFilter] = useState("");
  const [delayedFilter, setDelayedFilter] = useState("");
  const [isPending, startTransition] = useTransition();

  const handleInputChanged = (e) => {
    // Causes a high priority update
    setFilter(e.target.value);

    startTransition(() => {
      // Causes a low priority update
      setDelayedFilter(e.target.value);
    });
  };

  return (
    <>
      <input value={filter} onChange={handleInputChanged} />
    </>
  );
};

When a low priority rerender is interrupted, it will wait for the high priority rerender that interrupted it to finish and then it will start over again from the beginning.

An important property of updates is that they’re batched together with other updates of the same priority: all high priority updates that occur in the same call stack will be batched together and cause a single high priority rerender. The same goes for low priority updates.

const Component = () => {
  const [filter, setFilter] = useState("");
  const [otherFilter, setOtherFilter] = useState("");
  const [delayedFilter, setDelayedFilter] = useState("");
  const [delayedOtherFilter, setDelayedOtherFilter] = useState("");
  const [isPending, startTransition] = useTransition();

  const handleInputChanged = (e) => {
    // These two will be batched and cause
    // a SINGLE high priority update
    setFilter(e.target.value);
    setOtherFilter(e.target.value.toUpperCase());

    startTransition(() => {
      // These two will be batched and cause
      // a SINGLE low priority update
      setDelayedFilter(e.target.value);
      setDelayedOtherFilter(e.target.value.toUpperCase());
    });
  };

  return (
    <>
      <input value={filter} onChange={handleInputChanged} />
    </>
  );
};

Here is a diagram that illustrates this process:

Then, to keep the fast parts of the UI responsive while having other parts that are slow, we make updates to the fast parts high priority and updates to the slow parts low priority.

This way, whenever a fast part has to update while a slow part is rendering in the background, the high priority update interrupts the rendering of the slow part to keep the fast part responsive, and once the fast part finishes rendering, React goes back to rendering the slow part.

Let’s see how this works in practice.

Concurrent List Filtering

Here’s the same list filtering example we saw before, but now we’re using concurrent features to decouple the fast parts (input) from the slow parts (list filtering) and keep the fast parts responsive.

Live Demo: https://stackblitz.com/edit/react-gqqvon?file=src%2FApp.js

import React, { memo, useState, useTransition } from "react";
// `list`, `sleep` and `useDebug` are local helpers from the live demo

export default function App() {
  // This state will be updated by
  // HIGH priority updates
  const [filter, setFilter] = useState("");
  // This state will be updated by
  // LOW priority updates
  const [delayedFilter, setDelayedFilter] = useState("");
  const [isPending, startTransition] = useTransition();

  // Ignore this for now, it's just
  // a hook to help us debug concurrent
  // features, later I'll explain how it works
  useDebug({ filter, delayedFilter });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
          startTransition(() => {
            // Here we're triggering the low
            // priority update that will
            // change `delayedFilter`'s value
            setDelayedFilter(e.target.value);
          });
        }}
      />

      <List filter={delayedFilter} />
    </div>
  );
}

// Notice we're memoing List now
const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
});

There is a lot to unpack here, so let’s start by examining the changes we made to the code.

First, we created a new delayedFilter state that we’re passing to <List /> and that will be updated by low priority updates.

Second, when the user interacts with the input, we trigger both a high priority update that modifies filter and a low priority update (by calling setDelayedFilter inside startTransition) that modifies delayedFilter.

Lastly, we’ve memoized <List />, so now, instead of rerendering every time its parent (<App /> in this case) rerenders, it only rerenders when the props it receives differ from the ones it received in the previous render.
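
As a refresher on how that gating works: memo compares the previous and next props with a shallow comparison (Object.is on each prop) and skips the rerender when they all match; it also accepts an optional custom comparator. A small sketch, with a hypothetical comparator that only looks at filter:

import { memo } from "react";

const List = memo(
  ({ filter }) => {
    // ...same rendering logic as above
    return null;
  },
  // Optional second argument: return true to mean "props are equal,
  // skip this rerender". Without it, memo shallowly compares all props.
  (prevProps, nextProps) => prevProps.filter === nextProps.filter
);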

Now let’s analyze how it behaves.

When <App /> renders for the first time, this first render is a high priority render, because it was triggered by ReactDOM.createRoot, therefore both filter and delayedFilter have the same empty string value.

Then we start interacting with the input, by writing "tasty".

For each keystroke, both a high priority update and a low priority update are triggered.

High priority updates are always processed before low priority updates, so after the first keystroke a high priority rerender starts. In this rerender, only the modification contained in the high priority update takes effect: filter has value "t", but delayedFilter remains unaltered, with value "".

In this sense, note that until the low priority rerender that rerenders <List /> is completed, <List /> will be stale.

The fact that delayedFilter does not get modified in the high priority update is a key factor in keeping the input (the fast part of the UI) responsive: because <List /> is memoized and receives the same delayedFilter as in the previous render, it does not rerender.

Once this high priority rerender finishes, React commits the result and carries on with the lifecycle (insertion effects -> DOM mutations -> layout -> layout effects -> paint -> effects).
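
If you want to watch that ordering yourself, a tiny probe component like this sketch logs each stage to the console:

import { useEffect, useInsertionEffect, useLayoutEffect } from "react";

const LifecycleProbe = () => {
  console.log("render");

  // Fires before React applies its DOM mutations
  useInsertionEffect(() => console.log("insertion effect"));

  // Fires after DOM mutations, before the browser paints
  useLayoutEffect(() => console.log("layout effect"));

  // Typically fires after the browser has painted
  useEffect(() => console.log("effect"));

  return null;
};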

After that, it starts the low priority rerender, in which filter has value "t" (because the high priority update changed it) and delayedFilter also has value "t", which comes from the low priority update.

During this low priority rerender, we type the next letter, "a", which causes yet another two updates, a high priority one and a low priority one.

Because a high priority update was triggered, the low priority rerender is interrupted, and then we start the high priority rerender.

In this high priority rerender, filter has the new value "ta", but delayedFilter still has value "". What gives?

Modifications to state caused by low priority updates are only committed when a low priority render finishes; in the meantime, high priority rerenders will "see" these values unmodified, because the modifications were never committed.

We may think of low priority rerenders as a kind of "draft" rerender, in the sense that we throw it away as soon as it is interrupted by some high priority update.

Since in this high priority rerender delayedFilter still has the same value as before, <List />, which is slow, does not rerender.

When the high priority rerender finishes, we go back to the low priority render, and the cycle continues until we stop interacting with the input.

Then there will be no more high priority updates to interrupt low priority ones, and finally the latest low priority render will be able to finish.

What we just described can be seen in the gif above by reading the console.

Also, you’ll notice that sometimes we start high priority rerenders without first starting and then interrupting a low priority rerender. This happens because we typed so quickly that, before React could even start the low priority rerender, another high priority update was already scheduled.

This whole process might be a little bit confusing at first, but there’s a very suitable analogy that can help us grok these concepts.

Concurrent Rendering ~ Branching Workflow

Let’s say we’re developing an application and we’re using Git to track its codebase.

There’s a main branch, that represents the code that is in production.

When we want to write a new feature, we create a feature branch off main, like feature/awesome-feature, and when we’re done working on it, we merge it back to main.

And when there’s some critical issue in production, we create a hotfix off main, like hotfix/fix-nasty-bug, and when we’re done with it, we also merge it back to main.

Now, what happens when we’re working on some feature, and suddenly we have to ship some hotfix?

Shipping the hotfix is more urgent than delivering the feature, so (supposing we’re the only dev working on the project) we have to interrupt our work on the feature and start working on the hotfix until we complete and merge it.

Only after we ship the hotfix can we go back to working on the feature, but as our feature was based on main before we merged the hotfix, to continue working on it we first need to "catch up" with main, either by merging main into our feature branch or by rebasing the feature branch onto main.

In this example, after we finished the hotfix, we could go back to the feature work long enough to finish it, but it could very well be the case that our feature work got interrupted again due to some other bug in production that needed a hotfix.

If you’ve been paying attention, you probably noticed that features are similar to low priority updates, as working on the feature branch, which is analogous to low priority rendering, may be interrupted at any time by the need to ship a hotfix, which in turn equates to high priority updates.

Also, after finishing a hotfix, before we can go back to our feature work we need to pull the changes introduced by the hotfix into the feature branch, which (here the analogy breaks down a little bit) is roughly equivalent to the fact that when a low priority render resumes, it has to start from scratch to incorporate the modifications made by high priority updates.

Now that we have a good understanding of React’s concurrent rendering, we’ll explore concurrent features in depth.

Concurrent Features

At the time of publication, there are only two concurrent features, that is, only two ways of "activating" concurrent rendering: transitions (useTransition or the standalone startTransition) and deferred values (useDeferredValue).

Both these features work according to the same principle: they let us mark certain updates as low priority, and then, as we’ve seen previously, the concurrent renderer takes care of the rest.
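
As a quick note on the two transition flavors: useTransition must be called from inside a component and gives us an isPending flag, while the standalone startTransition can be called from anywhere (for example, a data layer) but offers no pending indicator. A minimal sketch, where setFilterAsTransition is just a hypothetical helper:

import { startTransition } from "react";

// Hypothetical helper: marks a state update as low priority from outside
// a component. `setDelayedFilter` is expected to be a state setter.
export function setFilterAsTransition(setDelayedFilter, value) {
  startTransition(() => {
    setDelayedFilter(value);
  });
}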

Transitions (useTransition)

The Transitions API gives us an imperative way of marking updates as low priority: it exposes a startTransition function that receives a callback, and any state updates that occur within that callback are marked as low priority.

We’ve already seen transitions being used to deal with the slow filtered list, but it’s worth taking another look at it:

Live Demo: https://stackblitz.com/edit/react-bucnxz?file=src%2FApp.js

export default function App() {
  // This state will be updated by
  // HIGH priority updates
  const [filter, setFilter] = useState("");
  // This state will be updated by
  // LOW priority updates
  const [delayedFilter, setDelayedFilter] = useState("");
  const [isPending, startTransition] = useTransition();

  // Ignore this for now, it's just
  // a hook to help us debug concurrent
  // features, later I'll explain how it works
  useDebug({ filter, delayedFilter });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
          startTransition(() => {
            // Here we're triggering the low
            // priority update that will
            // change `delayedFilter`'s value
            setDelayedFilter(e.target.value);
          });
        }}
      />

      {/*
       * We can use isPending to signal
       * that we have a transition pending
       */}

      {isPending && "Recalculating..."}

      <List filter={delayedFilter} />
    </div>
  );
}

// Notice we're memoing List now
const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
});

The only difference between this example and the previous one is that we’re using isPending to show feedback to the user that we’re "recalculating".

Detailed Behavior

There are a few behaviors of the Transitions API worth taking note of:

startTransition ALWAYS triggers both a high priority and a low priority update

Calling startTransition, even if we don’t trigger any updates inside of it, will still trigger a high priority update (yes, you read that correctly, a high priority update) and a low priority update.

To be honest, I don’t really understand the purpose of this behavior, but regardless, it’s a piece of information that may come in handy when debugging concurrent rendering.

Here’s an example:

Live Demo: https://stackblitz.com/edit/react-tamorr?file=src%2FApp.js

export default function App() {
  const [isPending, startTransition] = useTransition();

  useDebug();

  return (
    <button
      onClick={() => {
        startTransition(() => {});
      }}
    >
      Start Transition
    </button>
  );
}

startTransition‘s callback runs IMMEDIATELY

The function we pass as an argument to startTransition runs immediately and synchronously.

This is important for debugging purposes, but it also indicates that we should not do expensive work inside the callback, otherwise we will block the render.

It’s okay (and somewhat expected) for the updates triggered within startTransition’s callback to cause expensive rerenders, as these will run in the background, but it’s not okay for this expensive work to take place within the callback itself.

Live Demo: https://stackblitz.com/edit/react-dppjz4?file=src%2FApp.js

export default function App() {
  const [isPending, startTransition] = useTransition();

  useDebug();

  return (
    <button
      onClick={() => {
        console.log("Clicked!");
        startTransition(() => {
          console.log("Callback ran!");
        });
      }}
    >
      Start Transition
    </button>
  );
}

State updates inside startTransition‘s callback MUST be in the same call stack as the callback itself

For state updates to be marked as low priority, they must happen synchronously, in the same call stack as startTransition’s callback; otherwise they won’t be marked as low priority.

In practice, this means that these won’t work as expected:

startTransition(() => {
  setTimeout(() => {
    // By the time setTimeout's callback is called
    // we're already in another call stack
    // This will be marked as a high priority update
    // instead
    setCount((count) => count + 1);
  }, 1000);
});

startTransition(async () => {
  await asyncWork();

  // Here we're inside a different call stack
  setCount((count) => count + 1);
});

startTransition(() => {
  asyncWork().then(() => {
    // Different call stack
    setCount((count) => count + 1);
  });
});

If we need to use these constructs, this is what we should do instead:

setTimeout(() => {
  startTransition(() => {
    setCount((count) => count + 1);
  });
}, 1000);

await asyncWork();

startTransition(() => {
  setCount((count) => count + 1);
});

asyncWork().then(() => {
  startTransition(() => {
    setCount((count) => count + 1);
  });
});

ALL transitions are batched in a SINGLE rerender

This is actually a property of low priority updates, as we’ve mentioned earlier, where multiple low priority updates are batched together in a single rerender.

However, I believe it’s worth reiterating this in the context of transitions.

When there’s a low priority update pending, if other low priority updates are triggered before the first low priority update could be handled, all low priority updates will be carried out at once in the same rerender.

Also, there’s currently an all or nothing policy in place: even when these low priority updates rerender parts of the component tree that are completely unrelated (e.g. sibling components), and one part has already finished rerendering, an interruption will make the whole rerender start over from the very beginning.

Live Demo: https://stackblitz.com/edit/react-qiifbk?file=src%2FApp.js,src%2Fstyle.css

export default function App() {
  return (
    <div
      style={{
        display: "flex",
      }}
    >
      <Component name="First" />
      <Component name="Second" />
    </div>
  );
}

export const Component = ({ name }) => {
  const [filter, setFilter] = useState("");
  const [delayedFilter, setDelayedFilter] = useState("");
  const [isPending, startTransition] = useTransition();

  useDebug({ name });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
          startTransition(() => {
            setDelayedFilter(e.target.value);
          });
        }}
      />
      {isPending && "Recalculating..."}

      <List filter={delayedFilter} />
    </div>
  );
};

const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(500);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
});

Transitions only work for state, but not for refs

Even though you can do pretty much anything inside startTransition’s callback, including modifying a ref, the only way to mark updates as low priority is by calling a state setter.

Here’s an example that showcases this:

Live Demo: https://stackblitz.com/edit/react-m341q9?file=src%2FApp.js

export default function App() {
  const [filter, setFilter] = useState("");
  const delayedFilterRef = useRef(filter);
  const [isPending, startTransition] = useTransition();

  const delayedFilter = delayedFilterRef.current;

  useDebug({ filter, delayedFilter });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
          startTransition(() => {
            delayedFilterRef.current = e.target.value;
          });
        }}
      />
      {isPending && "Recalculating..."}

      <List filter={delayedFilter} />
    </div>
  );
}

const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
});

Notice how filter and delayedFilter are always in sync even in high priority rerenders.

Even though you can modify refs inside startTransition’s callback, the ref will behave exactly as if you had modified it outside of a transition.

Deferred Values (useDeferredValue)

In contrast to the Transitions API, which works imperatively, useDeferredValue works in a declarative manner.

useDeferredValue returns a piece of state that is the result of low priority updates, and it is set to the value we pass as an argument.

Low priority updates that update the aforementioned state are triggered when the current value passed to useDeferredValue differs from the one it received previously.

Before we go into more details, let’s use useDeferredValue to solve the filtered list problem once again:

Live Demo: https://stackblitz.com/edit/react-czrkqj?file=src%2FApp.js

export default function App() {
  // This state will be updated by
  // HIGH priority updates
  const [filter, setFilter] = useState("");
  // This state will be updated by
  // LOW priority updates
  const deferredFilter = useDeferredValue(filter);

  useDebug({ filter, deferredFilter });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
        }}
      />

      <List filter={deferredFilter} />
    </div>
  );
}

const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
});

The first time the application gets rendered, it is always a high priority render, as we already saw, and since useDeferredValue is being called for the first time, it won’t trigger a low priority update; it just returns the value it was initialized with.

So, in the very first render, both filter and deferredFilter have the same value, which is "".

After the first user interaction (by typing "t" into the input), a high priority update is triggered due to setFilter being called.

In this first high priority rerender, filter has value "t", and therefore useDeferredValue is called with "t", but it will keep returning the previous value ("" in this case) until its corresponding low priority rerender is finished.

Then, upon the completion of the first high priority rerender, a low priority rerender that was triggered because we passed a different value to useDeferredValue starts.

Inside low priority rerenders, the return value of useDeferredValue will always be up to date and set to the last value useDeferredValue received, which in practice is the value it receives during the low priority render.

From this point forward, it works analogously to the Transitions API example, which you can verify by looking at the logs, which are pretty much the same.

Detailed Behavior

Passing a new value to useDeferredValue during a low priority update WILL NOT trigger another update

As I said before, during low priority rerenders useDeferredValue will always return the latest value it received, even if the value it receives during the low priority rerender differs from the one it received in the preceding high priority rerender (i.e. the value that caused the low priority update in the first place).

Because of that, there is no need for another low priority update even if useDeferredValue receives a new value during a low priority rerender: the newly received value just "passes through" the hook and is returned immediately, so the most up-to-date value is available to the rerender.

Live Demo: https://stackblitz.com/edit/react-vgrsmu?file=src%2FApp.js

export default function App() {
  const [filter, setFilter] = useState("");
  const [delayedFilter, setDelayedFilter] = useState("");
  const [isPending, startTransition] = useTransition();
  // useDeferredValue will receive different values
  // during the high priority and the low priority
  // rerenders
  const deferredFilter = useDeferredValue(
    isPending ? filter.toUpperCase() : delayedFilter
  );

  useDebug({ filter, delayedFilter, deferredFilter });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
          startTransition(() => {
            setDelayedFilter(e.target.value);
          });
        }}
      />

      <List filter={deferredFilter} />
    </div>
  );
}

const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
});

Notice how even though we’re passing different values to useDeferredValue during high and low priority rerenders (because isPending is only true during a high priority rerender), everything remains pretty much the same, because the value it receives during the low priority rerender is what matters and no extra rerenders are triggered.

Updates caused by different useDeferredValue calls are ALL batched together in a single rerender

This is analogous to the fact that all transitions are batched in a single rerender.

So, even if there are multiple calls to useDeferredValue in unrelated parts of the component tree, their updates will all be batched and will be resolved in a single low priority rerender.

The same all or nothing policy that we saw for transitions applies here as well.
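
A minimal sketch of that behavior, assuming two unrelated sibling components that each defer their own prop; when text changes, both deferred values catch up in the same low priority rerender:

import React, { useDeferredValue, useState } from "react";

// Two unrelated siblings, each deferring its own prop. When `text` changes,
// both useDeferredValue calls schedule low priority updates, and both
// catch up in a single low priority rerender.
const Panel = ({ label, value }) => {
  const deferredValue = useDeferredValue(value);
  return (
    <p>
      {label}: {deferredValue}
    </p>
  );
};

export default function App() {
  const [text, setText] = useState("");

  return (
    <div>
      <input value={text} onChange={(e) => setText(e.target.value)} />
      <Panel label="First" value={text} />
      <Panel label="Second" value={text.toUpperCase()} />
    </div>
  );
}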

Suspense

So far, we’ve explored using concurrent rendering to deal with CPU-bound components, however, we can also use it to handle IO-bound components.

As IO operations can be done asynchronously in JavaScript, IO-bound components, even without concurrent rendering, do not pose the same problem as CPU-bound components: IO is non-blocking, so while we’re waiting for it we usually just render a spinner until it finishes.

We do, however, have other kinds of problems when dealing with IO-bound components, like when using Suspense to fetch data.

With Suspense, instead of initiating data fetching inside effects, which run only after render, we’re able to start fetching data during the render itself.

Now, while this is very interesting because we don’t need to wait for the render to finish in order to start fetching data, we lose a little bit of control in terms of coordinating loading states while data is loading.

For example, if we want to keep showing stale data while we’re fetching new data instead of showing a loading state, we can achieve it in the following manner with useEffect:

Live Demo: https://stackblitz.com/edit/react-4ffmx7?file=src%2FApp.js

const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const createEntry = () => ({
  id: Math.random(),
  name: Math.random(),
});

// Simulates data fetching
// Returns a list of 10 entries
const fetchData = async () => {
  await wait(600);
  return Array.from({ length: 10 }).map(() => createEntry());
};

export default function App() {
  const [data, setData] = useState();
  const [page, setPage] = useState(1);

  useEffect(() => {
    // To avoid race conditions
    let ignore = false;

    fetchData(page).then((data) => {
      if (!ignore) {
        setData(data);
      }
    });

    return () => {
      ignore = true;
    };
  }, [page]);

  return (
    <div>
      <div>
        <button onClick={() => setPage((page) => page - 1)}>
          Previous Page
        </button>
        <button onClick={() => setPage((page) => page + 1)}>Next Page</button>
      </div>

      {data
        ? data.map((entry) => <div key={entry.id}>{entry.name}</div>)
        : "Loading ..."}
    </div>
  );
}

When we reload the page and render the component for the first time, we don’t have any data, so we render a loading state.

Then, when changing pages, instead of falling back to a loading state, we use a stale-while-revalidate strategy, where we keep showing old data until new data is available.

However, when using Suspense for data fetching, whenever we try to render a component whose data is not yet available, it suspends and shows a fallback.

If we want to use the same stale-while-revalidate strategy with Suspense, we need concurrent features.

The general idea is to use low priority updates to modify the state that causes data to be refetched, so that while components will still suspend, they will do so in the background, and in the meantime we’ll keep showing stale data in high priority rerenders.

Here are two examples where we use concurrent features to implement stale-while-revalidate with suspense for data fetching, one with transitions and another with deferred values:

Sidenote: We’re omitting suspenseFetchData‘s implementation because currently there is no stable Suspense API for data fetching and I don’t want people doing stuff based on it, but if you REALLY want to see how it’s implemented, it’s in the Live Demo

Live Demo: https://stackblitz.com/edit/react-8aayzv?file=src%2FApp.js

export default function App() {
  const [page, setPage] = useState(1);
  const [delayedPage, setDelayedPage] = useState(page);
  const [isPending, startTransition] = useTransition();

  return (
    <div>
      <div>
        <button
          onClick={() => {
            setPage((page) => page - 1);
            startTransition(() => {
              setDelayedPage((page) => page - 1);
            });
          }}
        >
          Previous Page
        </button>
        <button
          onClick={() => {
            setPage((page) => page + 1);
            startTransition(() => {
              setDelayedPage((page) => page + 1);
            });
          }}
        >
          Next Page
        </button>
        Page: {page}
      </div>

      <Suspense fallback="Loading...">
        <Component page={delayedPage} />
      </Suspense>
    </div>
  );
}

const Component = ({ page }) => {
  const data = suspenseFetchData(page);

  return (
    <>
      {data
        ? data.map((entry) => <div key={entry.id}>{entry.name}</div>)
        : "Loading ..."}
    </>
  );
};

Live Demo: https://stackblitz.com/edit/react-zcfxm9?file=src%2FApp.js

export default function App() {
  const [page, setPage] = useState(1);
  const deferredPage = useDeferredValue(page);

  return (
    <div>
      <div>
        <button
          onClick={() => {
            setPage((page) => page - 1);
          }}
        >
          Previous Page
        </button>
        <button
          onClick={() => {
            setPage((page) => page + 1);
          }}
        >
          Next Page
        </button>
        Page: {page}
      </div>

      <Suspense fallback="Loading...">
        <Component page={deferredPage} />
      </Suspense>
    </div>
  );
}

const Component = ({ page }) => {
  const data = suspenseFetchData(page);

  return (
    <>
      {data
        ? data.map((entry) => <div key={entry.id}>{entry.name}</div>)
        : "Loading ..."}
    </>
  );
};

Additional Considerations

Now that we understand how concurrent rendering in React works, and how we leverage it through the use of concurrent features, there are some additional considerations that I’d like to share with you.

Suspension Points (Preemption)

Every time we have concurrency without parallelism (which is our case, as concurrent rendering takes place in a single thread), there is a need for preemption: in order to switch tasks, we need to stop the task we’re currently running before taking on another one.

The "places" where we’re interruption is allows, is what I’m calling suspension points (note that "supension" here has little to do with Suspense).

In a multithreaded environment, if we don’t use locks, each line of code is a suspension point, which means that at any time, regardless of what is being executed, the thread can be suspended to allow another thread to execute.

In React, suspension points are located between the renders of individual components.

This is a crucial piece of information, because this means that the render of individual components cannot be interrupted, so once a component starts rendering, it’s going to render until it reaches the return statement.

Only then, before moving on to the next component’s render, will React check whether there are high priority updates pending.

Here’s a demonstration:

Live Demo: https://stackblitz.com/edit/react-xvlsro?file=src%2FApp.js

export default function App() {
  const [filter, setFilter] = useState("");
  const [delayedFilter, setDelayedFilter] = useState("");
  const [isPending, startTransition] = useTransition();

  useDebug({ filter, delayedFilter });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
          startTransition(() => {
            setDelayedFilter(e.target.value);
          });
        }}
      />
      {isPending && "Recalculating..."}

      <List filter={delayedFilter} />
    </div>
  );
}

const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}

      {Array.from({ length: 10 }).map((_, index) => (
        <Slow key={index} index={index} />
      ))}
    </ul>
  );
});

const Slow = ({ index }) => {
  console.log(`Before sleep: ${index}`);

  sleep(50);

  console.log(`After sleep: ${index}`);

  return null;
};

This example is the list filtering one with concurrent rendering using transitions, but with a slight modification.

Now, instead of making <List /> itself the slow component, we’re rendering a list of this new <Slow /> component, which, as you might have guessed by its name, is where our bottleneck now lies.

There are two important things to notice here:

First, depending on how quickly we interrupt low priority renders, more or fewer instances of <Slow /> get rendered before the interruption takes place.

Second, console logs from <Slow /> are always in pairs, which means that once we start rendering <Slow />, we cannot interrupt it.

If we could interrupt <Slow /> mid-render, there would eventually be an interruption occurring after the first log entry but before the second one, and we would see an unpaired entry.

As a consequence, we should avoid concentrating a CPU-intensive task in a single component, as opposed to spreading it out across multiple components.

If that happens, not even React’s concurrent renderer can do much, because once we start rendering this slow component, we won’t be able to handle high priority updates until it finishes rendering.
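
To make that concrete, here’s a rough sketch of the two shapes (expensiveWork is a hypothetical stand-in for whatever makes a row costly to render): the monolithic version gives React no chance to yield, while the granular version creates a suspension point between every row.

// Harder to interrupt: all the heavy work happens inside one component,
// so React can't check for high priority updates until it returns.
const MonolithicList = ({ items }) => (
  <ul>
    {items.map((item) => {
      const row = expensiveWork(item); // hypothetical heavy computation
      return <li key={item.id}>{row}</li>;
    })}
  </ul>
);

// Easier to interrupt: each row is its own component, so React gets a
// suspension point between every <Row /> render.
const Row = ({ item }) => <li>{expensiveWork(item)}</li>;

const GranularList = ({ items }) => (
  <ul>
    {items.map((item) => (
      <Row key={item.id} item={item} />
    ))}
  </ul>
);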

Low Priority and High Priority Updates are Single Tiered

Even though React has lots of internal priorities for concurrent rendering, updates are either high priority or low priority, with no in-between.

All high priority updates have the same priority, and the same thing applies for low priority updates.

This also means that low priority updates, regardless of whether they were triggered by unrelated parts of the component tree, and regardless of whether they came from a transition or a deferred value, will have the same priority, will be batched together in the same rerender, and will follow the "all or nothing" policy we saw before.

See this example below:

Live Demo: https://stackblitz.com/edit/react-fwhrsk?file=src%2FApp.js

export default function App() {
  const [filter, setFilter] = useState("");
  const [delayedFilter, setDelayedFilter] = useState("");
  const deferredFilter = useDeferredValue(filter);
  const [isPending, startTransition] = useTransition();

  useDebug({ filter, delayedFilter, deferredFilter });

  return (
    <div className="container">
      <input
        value={filter}
        onChange={(e) => {
          setFilter(e.target.value);
          startTransition(() => {
            setDelayedFilter(e.target.value);
          });
        }}
      />
      {isPending && "Recalculating..."}

      <List filter={delayedFilter} />
    </div>
  );
}

const List = memo(({ filter }) => {
  const filteredList = list.filter((entry) =>
    entry.name.toLowerCase().includes(filter.toLowerCase())
  );

  sleep(100);

  return (
    <ul>
      {filteredList.map((item) => (
        <li key={item.id}>
          {item.name} - ${item.price}
        </li>
      ))}
    </ul>
  );
});

Notice that delayedFilter and deferredFilter are always in sync, and no extra low priority updates are triggered.

Debugging Concurrent Rendering

Debugging components that use concurrent rendering is tricky because they render twice, once due to the high priority update and again due to the low priority one.

This can make things very confusing when we want to inspect (e.g. log to console) values during render, especially because some values will differ between high and low priority renders.

In this context, a question that naturally arises is: how do we know whether we’re in a high priority or a low priority render, and how do we know when a low priority render has been interrupted?

There are a few indicators that we can use to help us identify that:

The first one comes from looking at useDeferredValue‘s return value.

On useDeferredValue’s first render (mount), it returns the same value it receives; after that, if we pass it a value different from the previous one, it will only return the new value if we’re in a low priority rerender.

So we could do something like this:

// We create a new reference in every render,
// so probe will always be different from
// the previous render
const probe = {};
const deferredProbe = useDeferredValue(probe);

// If we're not on the first render
const isLowPriority = probe === deferredProbe;

For the component’s first render, there is no easy way to detect whether we’re in a high priority or low priority render, because the component’s first render could have been triggered by a low priority update:

const App = () => {
  const [show, setShow] = useState(false);
  const deferredShow = useDeferredValue(show);

  return (
    <>
      <button onClick={() => setShow(true)}>Show</button>
      {/* 
Its first render will be triggered 
        by a low priority update to deferredShow 
      */}
      {deferredShow && <Component />}
    </>
  );
};

Second, all effects (useInsertionEffect, useLayoutEffect, useEffect) run only after the render phase, which means that by the time they run, not only has the component they’re called in finished rendering, it has also been committed, meaning the whole component subtree for which the render was scheduled has finished rerendering.

With these indicators, we can build a hook to help us debug concurrent-enabled components:

import { useDeferredValue, useLayoutEffect, useRef } from "react";

const useDebugConcurrent = ({
  onFirstRenderStart,
  onFirstRenderEnd,
  onLowPriorityStart,
  onLowPriorityEnd,
  onHighPriorityStart,
  onHighPriorityEnd,
}) => {
  const probe = {};
  const deferredProbe = useDeferredValue(probe);
  const isFirstRenderRef = useRef(true);
  const isFirstRender = isFirstRenderRef.current;

  const isLowPriority = probe === deferredProbe;

  if (isFirstRender) {
    isFirstRenderRef.current = false;
    onFirstRenderStart?.();
  } else {
    if (isLowPriority) {
      onLowPriorityStart?.();
    } else {
      onHighPriorityStart?.();
    }
  }

  useLayoutEffect(() => {
    if (isFirstRender) {
      onFirstRenderEnd?.();
    } else {
      if (isLowPriority) {
        onLowPriorityEnd?.();
      } else {
        onHighPriorityEnd?.();
      }
    }
  });
};

A variation of the above hook is what I used in the previous examples. The difference is that, as this is a generic version that’s supposed to work consistently in any situation, we can’t reliably detect interruptions (other than noticing a low priority render start that is not matched by a low priority render end), nor can we detect the first render’s priority.
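
For reference, a useDebug wrapper around it could look something like this sketch; this is not necessarily the exact implementation used in the live demos, it just logs render boundaries together with whatever values we pass in:

// A sketch of a useDebug wrapper around useDebugConcurrent (defined above).
const useDebug = (values = {}) => {
  const label = JSON.stringify(values);

  useDebugConcurrent({
    onFirstRenderStart: () => console.log(`First render started ${label}`),
    onFirstRenderEnd: () => console.log(`First render finished ${label}`),
    onHighPriorityStart: () => console.log(`High priority render started ${label}`),
    onHighPriorityEnd: () => console.log(`High priority render finished ${label}`),
    onLowPriorityStart: () => console.log(`Low priority render started ${label}`),
    onLowPriorityEnd: () => console.log(`Low priority render finished ${label}`),
  });
};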

Final Considerations

React’s concurrent renderer takes our ability to create great user experiences to the next level by dealing gracefully with long-standing issues in frontend development.

I’m curious to see where the React team will take concurrent rendering in the future, especially with the possibility of delegating background renders to other threads with the help of Web Workers.

We live in a very exciting time for frontend development.
