React Performance Primer

In the last few months, I worked on a very large React and Redux application, actively developed by a team of around 20 people with mixed levels of experience.

The main form of work distribution was assigning people to screens. I’m not a big fan of this approach because it leads to code repetition and mixed practices within the same codebase (it’s better to break screens down into Components and have people implement those Components).

This work distribution, combined with a large number of requirements, delays, and rushed releases, led to some poor performance scenarios.

In this post, I will cover some of the more common problems related to React performance, how you can assess them and how to fix them.

Profiling

Even though some performance problems can be noticed simply through visual testing, you should always profile your app to know where the problems really lie. Chances are you will find out that not everything behaves as you expected.

What gets measured gets managed — Peter Drucker

The go-to solution to profile web applications is to use any browser’s Developer Tools. Not only can you debug your application and use watch expressions to know how values evolve over time, but you also have serious firepower in the Profiler tab.

Google has several guides that teach you how to use the Chrome profiler tool in order to analyze the run-time of your application.

Besides the browser profiler, the React DevTools, on top of the amazing features they already have, recently introduced the React Profiler.

The React Profiler lets you dive deep into the run-time behavior of your React app. You can individually inspect each of the updates React commits to the DOM, find out which interactions triggered them, and see how much time each Component took to render.

It will also be compatible with upcoming React features such as Suspense. Check out the React docs for more.

Besides profiling, you can use the highlight updates option of the React DevTools or the DomListener extension to find out which parts of your app are being updated in response to certain interactions. You might find that some parts are being updated unexpectedly simply because they are coupled.

Note: Keep in mind that the highlight updates option of the React DevTools will let you know every time the virtual DOM is updated, whereas the DomListener extension will only display what React commits to the DOM.

Preventing unnecessary re-renders

React has different lifecycle methods that let you hook up logic at specific times, such as when a component mounts or updates. One of these methods is shouldComponentUpdate, which lets you write logic to determine whether the Component should re-render or not.

Note: This chart (the React lifecycle diagram) is subject to change; you should be able to find the most up-to-date version here.

The shouldComponentUpdate method is called every time a component receives new props or changes its state and is one of the primary ways to optimize your React app’s performance.

shouldComponentUpdate receives the next props and state of the Component, which we can compare to the current ones, and then returns a boolean indicating whether we want to re-render or not.

class ProfileSection extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    // only re-render when the id changes and the next state is visible
    return this.props.id !== nextProps.id && nextState.visible;
  }
  render() {
    // ...
  }
}

PureComponent & React.memo

The React team determined that, in most cases, people simply compared whether the props or state had changed in order to decide whether to re-render. This is why they created React’s PureComponent.

import React, {PureComponent} from 'react';

class ProfileSection extends PureComponent {
  render() {
    // render will only happen if props or state have changed (shallow comparison)
    //...
  }
}

More recently, the React team released React.memo, which means that you no longer need to refactor your functional Component to a class Component in order to take advantage of this functionality.

import React, {memo} from 'react';

const ProfileSection = props => (
  // ...
);

export default memo(ProfileSection);

PureComponent and React.memo make use of a default shouldComponentUpdate implementation that performs a shallow comparison of props (and state, in the case of PureComponent). The keyword here is shallow, because there are a lot of ways we can break this optimization if we don’t keep it in mind.

A good way of ensuring you’re not breaking the optimization is to use primitive types as props and state whenever possible. This is because when we compare non-primitive types such as Array or Object we are not traversing the object and comparing values (that would be a deep comparison, not a shallow one) but rather comparing references.

This means that if, somewhere in the Component update path, we create a new Object or Array that ends up in state or gets passed as a prop, we are triggering unnecessary re-renders.
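To make this concrete, here is a minimal sketch of the trap, assuming the child UserList is a PureComponent or wrapped in React.memo:

class Profile extends React.Component {
  render() {
    // BAD: both of these create a brand new reference on every render,
    // so the shallow comparison in UserList always sees "changed" props
    const activeUsers = this.props.users.filter(user => user.active);
    const style = { margin: 8 };

    return <UserList users={activeUsers} style={style} />;
  }
}

A common fix is to derive such values where the reference stays stable between renders, for example in a memoized selector, in state, or as a module-level constant.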

Important Note: People assume that if they make all of their Components PureComponents they will immediately optimize their app. This isn’t always the case, and Ryan Florence wrote about why. The TL;DR is that when we use PureComponent we incur the cost of an extra diff. React already performs an element diff to determine what changed in the Component tree. By using PureComponent we also perform a diff of state and props, which can be costly when a Component re-renders a lot (meaning it’s always getting new props or changing state). If your Component is updated a lot, it might not benefit from being a PureComponent. Please benchmark before prematurely optimizing.

Binding Functions

Whenever we work with React we need to bind event handlers to the Component, otherwise we lose the context in which they execute, meaning we can’t call methods such as setState from within the event handler (which is something very common). Check this if you want to read more on why we need to bind functions.

Binding in Constructor vs Arrow Functions

With the release of ES2015, we are now able to use arrow functions. Arrow functions behave like normal functions except that they keep the surrounding context. They don’t get a this of their own, so they use the lexical this of the enclosing scope instead of creating a new one (regular Javascript functions create their own context when called).

This means that if we are using regular class methods we need to explicitly bind them in the constructor

class Button extends React.Component {
  constructor(props) {
    super(props);
    this.handleClick = this.handleClick.bind(this); // explicit bind
  }

  handleClick() {
    console.log(this.props);
  }

  render() {
    return (
      <button onClick={this.handleClick}>
        Click Me
      </button>
    );
  }
}

Or make use of Arrow functions as class properties

class Button extends React.Component {
  handleClick = () => {
    console.log(this.props);
  }

  render() {
    return (
      <button onClick={this.handleClick}>
        Click Me
      </button>
    );
  }
}

Most people prefer the latter because it removes the need to write a constructor most of the time (especially since, with modern class field syntax, we can write state = {}, which was the other common reason for having a constructor), and it also has a more concise syntax.

However, there’s a difference between them. Regular methods are added to the Component’s prototype, while arrow functions become instance properties. It’s important to know the distinction because, if a Component is created hundreds or thousands of times, each instance will carry its own copy of the method rather than reusing the single one on the prototype chain.
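A quick, illustrative way to see the difference (the class names here are just for demonstration):

class A extends React.Component {
  handleClick() {}        // regular method: lives on A.prototype
  render() { return null; }
}

class B extends React.Component {
  handleClick = () => {}; // class property: recreated for every instance
  render() { return null; }
}

console.log(typeof A.prototype.handleClick); // "function", shared by all instances
console.log(typeof B.prototype.handleClick); // "undefined", each instance owns a copy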

This may or may not be a problem and, once again, you won’t know until you measure it. Read more here for the distinctions between the two and also more nuances regarding the surrounding ecosystem.

Avoid binding/inlining when rendering

In some scenarios binding functions in render can be harmless, but in others it might lead to undesired side-effects.

class Button extends React.Component {
  handleClick() {
    console.log(this.props);
  }

  render() {
    return (
      // a new function is created on every render
      <button onClick={this.handleClick.bind(this)}>
        Click Me
      </button>
    );
  }
}

The same can happen when we inline functions

function Button(props) {
  return (
    // the inline arrow is a new function on every render
    <button onClick={() => console.log(props)}>
      Click Me
    </button>
  );
}

It’s important to know that two function instances in Javascript are never equal, even if their bodies are identical (you can check whether they have the same body by comparing the results of toString, but the references will still differ).

This means that if you perform some kind of inlining or binding in render you are always passing down a new prop and triggering a re-render. This is (probably) fine for simple Components, but in a Component with a deep tree of children it can be a problem. It is even worse if the Component receiving the function is a PureComponent: it will re-render every time because it always receives a new prop, effectively resulting in worse performance, since it checks on every update whether the props are different and they always will be.
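One common way around this, sketched below with hypothetical component names, is to bind once (as a class property) in the child and let it pass its own data back through the callback:

class TodoItem extends React.PureComponent {
  handleClick = () => {
    this.props.onToggle(this.props.id); // stable reference across renders
  };

  render() {
    return <li onClick={this.handleClick}>{this.props.text}</li>;
  }
}

class TodoList extends React.Component {
  render() {
    return (
      <ul>
        {this.props.todos.map(todo => (
          // no inline arrow here: onToggle keeps the same reference,
          // so the PureComponent shallow comparison can skip re-renders
          <TodoItem key={todo.id} {...todo} onToggle={this.props.toggleTodo} />
        ))}
      </ul>
    );
  }
}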

Not using (proper) keys

This is a pretty basic rule, but I decided it needed a particular mention because I found a lot of these:

// eslint-disable-line react/no-array-index-key

React needs a key prop when rendering lists so that it knows which elements have been added, removed, or changed. This improves the rendering performance of the list because, in the absence of keys, React will re-render the entire list whenever something changes, and you can imagine the cost that bears when rendering a big list.

Using indexes as keys is also a bad practice because your list then becomes dependent on the order of its elements, which leads to bugs if you sort the list or add new elements to it. This is not a direct performance bottleneck, but it may have undesired side-effects, especially if the order of the elements is not consistent.

When building generic components it’s easy to fall back on the index because you have no knowledge of the data the component will receive. In these scenarios the component API can (and should) accept a prop that identifies the field to be used as a key, which should be a unique identifier (e.g., email, username).

Check the following example from Robin Pokorny to see this problem in action.
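Going back to the generic-component case, here is a minimal sketch, where keyField is a hypothetical prop name that tells the list which field is the stable, unique identifier:

const GenericList = ({ items, keyField, renderItem }) => (
  <ul>
    {items.map(item => (
      // use the consumer-provided identifier instead of the array index
      <li key={item[keyField]}>{renderItem(item)}</li>
    ))}
  </ul>
);

// Usage: the consumer knows which field uniquely identifies its data
<GenericList items={users} keyField="email" renderItem={user => user.name} />;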

Debouncing & Throttling

Good UX sometimes requires that we analyze each keystroke of the user. A common use case is validating a form field according to some ruleset or rendering a filtered list in an auto-complete input field.

Debouncing and throttling come in handy in these situations because they let us avoid triggering a specific action too many times. If our auto-complete field makes a costly request to get new suggestions based on what the user typed, we don’t want to spam requests. We might want to wait a little and only perform the request every so often, or once the user stops typing.

It’s also important to know the distinction between throttling and debouncing.

Throttling limits the number of times a function is executed, reducing the calls to at most one in a specified interval of time. throttle(func, 500) will execute func at most once every 500ms.

Debouncing is a little trickier. Debouncing a function will trigger the original function after a specified amount of time has passed since the debounced function was last called.

If we declare const debounced = debounce(func, 200), we will only call func once 200ms have passed since the last call to debounced. This is useful when we want to respond to some event, but only after the interaction has stopped. A common use case is notification grouping: instead of sending 5 notifications, apps let some time pass until the notification stream has ended and then inform the user that they have X new notifications.
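As a rough illustration, here is a simplified debounce (real implementations such as lodash’s handle more edge cases like cancellation and leading/trailing options):

function debounce(func, wait) {
  let timeoutId;
  return function (...args) {
    clearTimeout(timeoutId); // reset the timer on every call
    timeoutId = setTimeout(() => func.apply(this, args), wait);
  };
}

const debounced = debounce(() => console.log('user stopped typing'), 200);
// calling debounced() repeatedly keeps pushing the timer back;
// the log only happens once 200ms pass without another call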

With these techniques, we can maintain the performance and responsiveness of the application with relatively low effort. Response times of around 100ms feel instantaneous to users, which means that anything faster doesn’t have a concrete impact on user experience. In other words, we can debounce/throttle functions by roughly 100ms and the app will keep its fast and responsive feel.

Also, most implementations of debounce and throttle have a leading option, meaning that you can trigger the original function right away and throttle/debounce the subsequent calls. This can be applied to the auto-complete example, where we want one request to be made as soon as the user starts typing so we can show some results.

To conclude this subject, make sure that you don’t debounce or throttle your change event handlers themselves, otherwise you might lose access to the event object. As Dan Abramov recommends, throttle/debounce the extra work you need to perform based on the user input, but keep the change handler synchronous so as not to harm the user experience.
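A minimal sketch of that recommendation, assuming a debounce helper (like the one above or lodash’s) and a hypothetical fetchSuggestions function:

class SearchBox extends React.Component {
  state = { query: '' };

  // debounced once per instance, not on every render
  search = debounce(query => fetchSuggestions(query), 300);

  handleChange = event => {
    const query = event.target.value; // read the event synchronously
    this.setState({ query });         // keep the controlled input responsive
    this.search(query);               // debounce only the costly work
  };

  render() {
    return <input value={this.state.query} onChange={this.handleChange} />;
  }
}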

If you’re curious about the implementation, lodash’s throttle and debounce sources are a common reference. Also, if you want an explanation of how to debounce and throttle an auto-complete field in React, check this article by Peter Bengtsson.

Split Big Components

Component-driven development promotes encapsulation and single responsibility, but these are easy to overlook if you don’t build with scale in mind or you’re pressured by deadlines. The last codebase I worked with had Components that were absolute monsters (1000+ lines of code). Connect a screen like that directly to your store (ours was Redux) and you have a recipe for a performance disaster.

React re-renders a Component whenever it receives new props or its state changes. This means that Components which are frequently updated should be encapsulated so that their updates don’t cause other Components to render unnecessarily.

This is also important when connecting Components to the application’s store. You don’t need to hook up all the Redux logic (connect, mapState/DispatchToProps) in one single place.

The following code is a modification of redux’s todos example. We have a Component that renders the list of todos and displays the possible filters (active, completed, all). This is a pretty basic example of why we should split components by their responsibility: each time we add a new todo or toggle a todo between active/completed, we re-render not only the list of todos but also the filters below it.

Simply refactoring and moving the filters into their own Component (Footer in the original example) will prevent them from re-rendering each time we modify the todo list.

const TodoScreen = ({ todos, toggleTodo }) => (
  <div>
    <ul>
      {todos.map(todo =>
        <Todo
          key={todo.id}
          {...todo}
          onClick={() => toggleTodo(todo.id)}
        />
      )}
    </ul>
    {/* The code below should be moved to its own Component */}
    <div>
      <span>Show: </span>
      <FilterLink filter={VisibilityFilters.SHOW_ALL}>
        All
      </FilterLink>
      <FilterLink filter={VisibilityFilters.SHOW_ACTIVE}>
        Active
      </FilterLink>
      <FilterLink filter={VisibilityFilters.SHOW_COMPLETED}>
        Completed
      </FilterLink>
    </div>
  </div>
);
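A sketch of the refactor, following the naming of the original redux todos example, where the connected VisibleTodoList is the only part that subscribes to the todo list:

const Footer = () => (
  <div>
    <span>Show: </span>
    <FilterLink filter={VisibilityFilters.SHOW_ALL}>All</FilterLink>
    <FilterLink filter={VisibilityFilters.SHOW_ACTIVE}>Active</FilterLink>
    <FilterLink filter={VisibilityFilters.SHOW_COMPLETED}>Completed</FilterLink>
  </div>
);

const TodoScreen = () => (
  <div>
    <VisibleTodoList /> {/* connected to the todos slice of the store */}
    <Footer />          {/* no longer re-renders when the todo list changes */}
  </div>
);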

Ohans Emmanuel wrote a detailed piece about this (and other) issues.

Lazy Loading and Code Splitting

Lazy Loading is the concept of initializing objects (or the equivalent) only when you need them, rather than doing it when the program starts (eager loading).

A common use case for Lazy Loading is any type of feed. Feeds (such as the ones you find on news and social media sites) have virtually no end, so there’s no natural point at which to stop loading data. You could use pagination, or you could lazy load more data as the user scrolls through the page. By avoiding loading a huge amount of data at once you keep your app interactive and performant.
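As a rough sketch of the scroll-based approach, using an IntersectionObserver on a sentinel element (loadMore, items, and FeedItem are assumed to come from your own code/store):

class Feed extends React.Component {
  sentinelRef = React.createRef();

  componentDidMount() {
    this.observer = new IntersectionObserver(entries => {
      if (entries[0].isIntersecting) {
        this.props.loadMore(); // fetch the next page only when the sentinel becomes visible
      }
    });
    this.observer.observe(this.sentinelRef.current);
  }

  componentWillUnmount() {
    this.observer.disconnect();
  }

  render() {
    return (
      <div>
        {this.props.items.map(item => (
          <FeedItem key={item.id} {...item} />
        ))}
        <div ref={this.sentinelRef} />
      </div>
    );
  }
}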

Only a fraction of your React components will load data, and only a fraction of those will benefit from Lazy Loading approaches. A much more common scenario in web applications involves loading different modules for different parts of the application; this is referred to as Code Splitting.

Code Splitting is commonly available in most Javascript module bundlers by using dynamic import statements (example taken from the React docs)

import("./math").then(math => {
  console.log(math.add(16, 26));
});

This lets the bundler know that it should create a separate file for the math module that you can now lazy load. This improves performance because instead of loading one big Javascript bundle the first time you open your app, you will download smaller bundles along the way.

By using the new React.lazy API, we can lazily load Components just when we need them (example taken from the React docs)

import React, { Suspense } from 'react';

const OtherComponent = React.lazy(() => import('./OtherComponent'));

function MyComponent() {
  return (
    <div>
      {/* lazy Components must render inside a Suspense boundary */}
      <Suspense fallback={<div>Loading...</div>}>
        <OtherComponent />
      </Suspense>
    </div>
  );
}

It is also possible to use dynamic imports to prefetch or preload modules. You should preload modules when a resource is likely to be used in the current page and prefetch when the resource is likely to be used in future navigations.
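For example, assuming webpack (4.6+), you can add these hints with “magic comments” on the dynamic import (the module names here are hypothetical):

// prefetch: likely needed for a future navigation, fetched during idle time
import(/* webpackPrefetch: true */ './ReportsPage');

// preload: likely needed on the current page, fetched alongside the parent chunk
import(/* webpackPreload: true */ './ChartingLibrary');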

Read the React docs about Code Splitting and Addy Osmani’s article about prefetch/preload for more information.

Optimize for Production

If you’re building a web app, chances are you are using a module bundler such as webpack, parcel, or rollup. All of these bundlers offer a production build setting that strips development-only code from your bundle and typically performs minification and other optimizations.

You always want to ship the least amount of code possible so that your users have to download fewer bytes in order to run your app.
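As a minimal sketch, assuming webpack 4+, switching to a production build can be as simple as setting the mode:

// webpack.config.js
module.exports = {
  mode: 'production', // enables minification and strips development-only code paths
  // ...the rest of your configuration
};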

Read the Cost of Javascript article by Addy Osmani to figure out why you should aggressively reduce the size of your Javascript, especially if you want a seamless experience on mobile.


All of these topics are related to React performance but, as swyx mentioned, not all of them have the same level of importance. Please take this into consideration. For example, memoization (PureComponent/React.memo) and decoupling components can be easy wins, whereas changing how you bind functions will only bring benefits in specific scenarios.

The React team does a great job of providing good APIs and nudging its users toward the correct approach naturally. As with everything, you can still run into problems as your application gets bigger and more complex, and when that happens you can refer back to the topics in this post and know what to do.

Never forget to measure before you optimize anything; always base your performance improvement efforts on metrics so that you are aware of the impact of your work.

I would also like to mention that this post wouldn’t exist if the React/Javascript community weren’t so awesome at sharing knowledge. I have linked several articles throughout this post, but I will also link some more sources on the subject.

Discuss on Twitter 💬