Benjamin Read's code garden.

A React performance case study


This article is about: javascript, react


Part of the way through a project to replace an app with a static site generated by GatsbyJS, initial tests revealed there was a lot we could do to improve performance. Here’s how we turned the project around, cutting load times to under 3 seconds and reducing perceived performance bottlenecks.

Here’s what the initial test looked like in WebPageTest (webpagetest.org):

Load time was 4.73 seconds: very slow.

| Metric | Value (first view) |
| --- | --- |
| Load Time | 4.733s |
| First Byte | 0.901s |
| Start Render | 2.100s |
| Speed Index | 2,266 |
| First Interactive | > 8.179s |
| Document Complete | 4.733s, 21 requests, 1,536kb |
| Fully Loaded | 8.055s, 23 requests, 2,691kb |
| Cost | $$$$ |

Identifying the Problem #

Our first challenge was working out which problems we actually needed to solve. Without a clear understanding of what was going wrong, we had no way of focusing on the tasks that would improve the situation.

One of the first things we did was find out how our site rendered across different browsers. Straight away, we noticed that browsers without the benefit of a fast JavaScript parser were significantly slower: sometimes it took up to 14 seconds to render a page. Interacting with key conversion-linked elements on the page was janky and stuttery, which further put users off.

We also had a popup module for cookies which took over a large part of the screen, and which couldn’t be dismissed until all of the JS was downloaded and parsed. This was a major issue to overcome.

Over later iterations we also implemented a few other strategies to help us pinpoint issues:

WebPageTest #

WebPageTest has got better and better over the years. It provides a terrific overview as well as a deep dive into the waterfall, and has tonnes of settings for different locations and devices.

It’s still my favourite performance tool.

Lighthouse #

I use the Lighthouse plugin for Google Chrome (instead of the version built into DevTools) because it contains newer features that we wanted to incorporate into our results. Lighthouse is typically better at pinpointing broader issues such as PWA status and accessibility.

HotJar #

I was skeptical about using HotJar to begin with: I felt it would add to our JavaScript burden rather than reduce it. However, we managed to integrate HotJar with our analytics tracking script, reducing the overall load. Loading the script asynchronously helped, but downloading and parsing it still took up processing power, reducing the interactivity of the site until it completed. So reducing the overall JS burden was still a necessity.
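
For reference, loading a third-party script asynchronously follows the standard pattern sketched below (the URL is a placeholder; Hotjar ships its own loader snippet):

```js
// Inject a third-party script without blocking HTML parsing or rendering.
// The src here is a placeholder, not Hotjar's real snippet.
const script = document.createElement('script');
script.src = 'https://example.com/analytics.js';
script.async = true; // download in parallel, execute as soon as it's ready
document.head.appendChild(script);
```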

HotJar later became instrumental in helping us identify a severe issue that resulted in a white screen, so it proved really useful in our testing and iteration process.

Splunk #

Splunk lets you collect errors in your application and log them offsite for later analysis. I particularly love Splunk because it allowed us to send any JavaScript errors that occurred on the client off to be examined and quantified later. Once we had implemented Splunk (using React’s error boundary API), we could see what was really going on with our app in the wild.
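
To make that concrete, here’s a minimal sketch of the pattern, assuming a hypothetical `logErrorToSplunk` helper and a placeholder collector endpoint (our real configuration isn’t shown): an error boundary catches render errors and forwards them for later analysis.

```jsx
import React from 'react';

// Hypothetical helper: forwards an error to a Splunk collector endpoint.
// The URL is a placeholder, not our real configuration.
function logErrorToSplunk(error, info) {
  fetch('/splunk-collector', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: error.message,
      stack: error.stack,
      componentStack: info.componentStack,
    }),
  }).catch(() => {
    // Swallow logging failures: reporting must never break the app itself.
  });
}

class ErrorBoundary extends React.Component {
  state = { hasError: false };

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  componentDidCatch(error, info) {
    logErrorToSplunk(error, info);
  }

  render() {
    if (this.state.hasError) {
      return <p>Something went wrong.</p>;
    }
    return this.props.children;
  }
}

export default ErrorBoundary;
```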

Having set up these tools, we were ready to dive into prioritising and fixing some of the significant issues we were seeing.

Issue 1: Large JavaScript Burden #

Even though GatsbyJS does a huge amount to reduce the amount of JavaScript (rendering what it can on the server, tree shaking and minification, to name a few), we had two issues with our JavaScript:

1. Pulling Too Much onto the Frontend #

GatsbyJS’s compiler looks for code that references the window object and renders that code on the client; the rest it renders on the server. We had a lot of extra JavaScript, including page configuration objects, that ran on the frontend instead of going through GatsbyJS’s data layer.

By refactoring back into Gatsby, we were able to reduce our bundle size.
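
As a rough illustration of that refactor (the page, query, and field names here are invented), the idea is to stop importing configuration objects into the client bundle and instead pull them from Gatsby’s GraphQL data layer at build time:

```jsx
import React from 'react';
import { graphql } from 'gatsby';

// Before: `import pageConfig from './page-config'` shipped the whole
// object to the browser. Now the data is baked in at build time.
export default function PricingPage({ data }) {
  const { title, currency } = data.site.siteMetadata;
  return <h1>{title} ({currency})</h1>;
}

// Gatsby runs this query at build time and passes the result as `data`.
export const query = graphql`
  query {
    site {
      siteMetadata {
        title
        currency
      }
    }
  }
`;
```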

We also noticed that larger libraries like Lodash were being used to render client-side content. By refactoring this library out of the client-rendered code, we streamlined the bundle further.
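
If removing a library entirely isn’t an option, a smaller version of the same win, sketched below, is to import individual Lodash modules so the bundler can leave the rest behind:

```js
// Pulls in all of lodash (tens of kilobytes) even though only one
// function is used:
import _ from 'lodash';
const a = _.get({ a: { b: { c: 1 } } }, 'a.b.c');

// Pulls in just the one module, so the bundle stays small:
import get from 'lodash/get';
const b = get({ a: { b: { c: 1 } } }, 'a.b.c');
```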

2. Using Webpack 3 #

We moved the site from GatsbyJS v1 to v2 (it took a day to complete). This led to a reduction in JS bundle size from about 700kb to around 530kb.

Issue 2: JavaScript Caching #

Using Webpack 3 was a huge step forward. Previously, in Gatsby v1, we could cache HTML really well, but page components couldn’t be cached. Webpack 3 allowed us to cache JS more aggressively, leading to much better performance.
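
Our hosting setup isn’t shown here, but as an illustrative Express sketch, the caching strategy boils down to this: content-hashed bundles get a year-long immutable cache, while HTML always revalidates so users pick up new deploys.

```js
const express = require('express');
const app = express();

// Content-hashed JS/CSS bundles: safe to cache "forever", because any
// change to a file produces a new filename.
app.use('/static', express.static('public/static', {
  immutable: true,
  maxAge: '1y',
}));

// HTML: always revalidate, so a new deploy is picked up immediately.
app.use(express.static('public', {
  setHeaders(res, path) {
    if (path.endsWith('.html')) {
      res.setHeader('Cache-Control', 'public, max-age=0, must-revalidate');
    }
  },
}));

app.listen(3000);
```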

After the work with Webpack we saw the following results:

Load time was 3.99 seconds: still slow, but showing improvement.

| Metric | Value (first view) |
| --- | --- |
| Load Time | 3.993s |
| First Byte | 0.344s |
| Start Render | 1.300s |
| Speed Index | 2,098 |
| First Interactive | > 4.663s |
| Document Complete | 3.993s, 21 requests, 740kb |
| Fully Loaded | 5.609s, 33 requests, 1,069kb |
| Cost | $$ |

Issue 3: A Blocking Cookie Module #

As I mentioned above, on mid-range phones without a fast JavaScript parser it could take up to 14 seconds for the site to become interactive. Users could, in principle, scroll around and use some of the functionality, but the cookie module prevented them from doing so. This was a particular issue on smaller screens, but it affected everyone.

To combat the issue we redesigned the module to be much smaller. We also set a timeout on the module so that it wouldn’t check for cookies, or even render, until after the DOM had loaded.

As a result, people were no longer blocked from using the site while the cookie module waited to be dismissed.
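
Here’s a sketch of the deferral technique (the component and markup are illustrative, and hooks are used for brevity even though the original implementation predates them):

```jsx
import React, { useEffect, useState } from 'react';

// Illustrative component: the real module also checked for an existing
// consent cookie before rendering anything.
function CookieBanner() {
  const [ready, setReady] = useState(false);

  useEffect(() => {
    // Stay out of the initial render; appear only once the page has
    // fully loaded, so the banner never blocks early interaction.
    const show = () => setReady(true);
    if (document.readyState === 'complete') {
      show();
    } else {
      window.addEventListener('load', show);
      return () => window.removeEventListener('load', show);
    }
  }, []);

  if (!ready) return null;

  return (
    <aside role="dialog" aria-label="Cookie notice">
      We use cookies. <button>OK</button>
    </aside>
  );
}

export default CookieBanner;
```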

Issue 4: Large Dataset #

On a high-converting page we made a frontend call for a large dataset of over 1.3MB. This was a stop-gap: ultimately the dataset needed to come not from a static JSON file but from a server that took around 35ms to compile the data we needed and make it available as a JSON object.

Instead of querying this API directly, we built an intermediary service that cached the dataset we needed as a static JSON file. The service also slimmed the data down by stripping fields we didn’t use.

This reduced our file to around 650KB.
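
In outline, the service looked something like this sketch (the upstream URL, field names, and refresh interval are placeholders; the global `fetch` assumes Node 18+):

```js
const fs = require('fs/promises');

const UPSTREAM_URL = 'https://api.example.com/rates'; // placeholder

// Keep only the fields the page actually renders.
function slim(records) {
  return records.map(({ id, name, rate }) => ({ id, name, rate }));
}

async function refreshCache() {
  const res = await fetch(UPSTREAM_URL);
  if (!res.ok) throw new Error(`Upstream failed: ${res.status}`);
  const full = await res.json();
  // Writing to a static file means the frontend never waits on the
  // upstream API; it just downloads a (much smaller) JSON asset.
  await fs.writeFile('public/data.json', JSON.stringify(slim(full)));
}

// Rebuild the cached file every five minutes (interval is illustrative).
refreshCache();
setInterval(refreshCache, 5 * 60 * 1000);
```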

Issue 5: Long-running API Calls #

The data we requested from the API endpoint mentioned above was mission-critical: it had to be available to our users, otherwise conversions would not be possible. For that reason, we hadn’t set a timeout on the API call.

We found a workaround by caching the data in the project at build time. That allowed us to set a timeout on the call: if the connection dropped suddenly, we could fall back to displaying the default dataset fetched earlier, at build time.
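
Here’s a sketch of that pattern using `AbortController` (the three-second budget and the snapshot module are illustrative):

```js
import defaultData from './data-snapshot.json'; // captured at build time

async function loadData(timeoutMs = 3000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);

  try {
    const res = await fetch('/data.json', { signal: controller.signal });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } catch {
    // Slow or dropped connection: show the build-time snapshot instead
    // of blocking the conversion flow.
    return defaultData;
  } finally {
    clearTimeout(timer);
  }
}
```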

Issue 6: Large Number of JavaScript Math Functions #

Having an intermediary service allowed us to further reduce the JavaScript on the site.

The large dataset contained raw numbers to several decimal places, but we only needed to show visitors two. Although it reduced the adaptability of the intermediary service we had built, we decided to perform the rounding there instead of in the browser. This freed up processing power on the main thread and further reduced load times.
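
The rounding itself is trivial; the win was doing it once in the service rather than on every render in the browser. Something like this (field names are illustrative):

```js
// In the intermediary service, before the cached JSON is written:
const records = [{ id: 1, rate: 3.14159265 }]; // illustrative input
const rounded = records.map((r) => ({
  ...r,
  rate: Number(r.rate.toFixed(2)), // 3.14159265 -> 3.14
}));
```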

Issue 7: Avoiding Re-rendering #

I’m going to cover this in more depth in another post, but we also managed to reduce page re-renders massively. We did this by using the Context API to share data across several modules via state; most of our updater functions lived in that state as well. This meant that only the components consuming the data re-rendered when it changed.
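
In broad strokes (names invented, hooks used for brevity), the shape was a provider that owns both the data and its updaters, with consumers subscribing via context:

```jsx
import React, { createContext, useContext, useState } from 'react';

const RatesContext = createContext(null);

export function RatesProvider({ children }) {
  const [rates, setRates] = useState([]);
  // State and its updaters live together in the context value, so
  // consumers never need props threaded through intermediate layers.
  return (
    <RatesContext.Provider value={{ rates, setRates }}>
      {children}
    </RatesContext.Provider>
  );
}

export function useRates() {
  return useContext(RatesContext);
}

// Only components that call useRates() re-render when rates change;
// everything else in the tree is left alone.
function RatesTable() {
  const { rates } = useRates();
  return (
    <ul>
      {rates.map((r) => (
        <li key={r.id}>{r.name}: {r.rate}</li>
      ))}
    </ul>
  );
}
```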

Conclusion #

A recent performance test shows we have achieved the following:

Load time is now an acceptable 1.78 seconds.

| Metric | Value (first view) |
| --- | --- |
| Load Time | 1.787s |
| First Byte | 0.223s |
| Start Render | 1.200s |
| Speed Index | 1.384s |
| First Interactive | > 2.698s |
| Document Complete | 1.787s, 21 requests, 301kb |
| Fully Loaded | 2.571s, 33 requests, 438kb |
| Cost | $ |

I recognise that these results were achieved under optimal conditions, and that actual, not to mention perceived, performance isn’t captured by these figures. But we also need achievable goals that help us stay motivated and focused.

We haven’t finished yet either … more work will continue to be done during the lifetime of this project.

This has been a great adventure, and it’s been exciting to see our fully-loaded times go from around 10 seconds to 4, then down to 2.5, where they currently stand. I hope this post gives you some clues about how you can use tricks new and old to make your apps more performant.
