
Benchmarking JavaScript Memory Usage

Is your website bogged down by JavaScript memory usage? This article explores the challenges of measuring memory usage and proposes a new way to collect data.

One of the things that is so challenging about the conversation around memory usage on the web right now is the sheer number of unknowns.

We haven't historically had ways of accurately determining how much memory a page is using in the real world, which means we haven't been able to draw a connection between memory usage and business or user engagement metrics to be able to determine what "good" looks like. So at the moment, we have no idea how problematic memory is, other than anecdotal stories that crop up here and there.

We also haven't seen much in the way of at-scale synthetic testing to at least give us a comparison point to see how our pages might stack up with the web as a whole. This means we have no goal posts, no way to tell if the amount of memory we use is even ok when compared to the broader web. To quote Michael Hablich of the v8 team: "There is no clear communication for web developers what to shoot for."

Because we don't have data about the business impact, nor do we have data for benchmarking, there's minimal interest in memory from the broader web development community. And because we don't have that broader interest, browsers have very little incentive to focus on leveling up memory tooling and metrics on the web the same way they have in other performance-related areas. (Though we are seeing a few improvements here and there.)

And because we don't have better tooling or metrics...well, you can probably see the circular logic here. It's a chicken or the egg problem.

The first issue, not knowing the business impact, is gonna require a lot of individual sites doing the work of adding memory tracking to their RUM data and connecting the dots. The second problem, not having benchmarks, is something we can start to fix.

measureUserAgentSpecificMemory

Chrome introduced a new API on the performance object for collecting memory-related information: measureUserAgentSpecificMemory. (At the moment, there's been no forward momentum from Safari on adopting this, and Mozilla was still fine-tuning some details in the proposed specification.)

The measureUserAgentSpecificMemory API returns a breakdown of how many bytes the page consumes for memory related to JavaScript and DOM elements only.

According to research from the v8 team, 35% of memory allocation on the web is JavaScript related, and 10% is for representing DOM elements in memory. The remaining 55% is images,[1] browser features, and all the other stuff that gets put in memory. So while this API is limited to JS and DOM related information at the moment, that does comprise a large portion (45%) of the actual memory usage of a page.

An article by Ulan, who spearheaded a lot of the work for the API, provides a sample return object.

{
  bytes: 60_000_000,
  breakdown: [
    {
      bytes: 40_000_000,
      attribution: [
        {
          url: "https://foo.com",
          scope: "Window",
        },
      ],
      types: ["JS"]
    },
    {
      bytes: 0,
      attribution: [],
      types: []
    },
    {
      bytes: 20_000_000,
      attribution: [
        {
          url: "https://foo.com/iframe",
          container: {
            id: "iframe-id-attribute",
            src: "redirect.html?target=iframe.html",
          },
        },
      ],
      types: ["JS"]
    },
  ]
}

The response provides a top-level bytes property, which contains the total JS and DOM related memory usage of the page, and a breakdown array that lets you see where that memory allocation comes from.

With this information, not only can we see the total JS and DOM related memory usage of a page, but we can also see how much of that memory is from first-party frames vs third-party frames and how much of that is globally shared.

That's pretty darn interesting information, so I wanted to run some tests and see if we could start establishing some benchmarks around memory usage.

Setting up the tests

The measureUserAgentSpecificMemory API is a promise-based API, and obtaining the result is fairly straightforward.


if (performance.measureUserAgentSpecificMemory) {
  let result;
  try {
    result = await performance.measureUserAgentSpecificMemory();
  } catch (error) {
    if (error instanceof DOMException &&
        error.name === "SecurityError") {
      console.log("The context is not secure.");
    } else {
      throw error;
    }
  }
  console.log(result);
}

Unfortunately, using it in production is challenging due to necessary security considerations. Synthetically, we can get around that though.

The first thing we need to do is pass a pair of Chrome flags to bypass the security restrictions: specifically, the ominously named --disable-web-security flag, as well as the --no-site-isolation flag to accurately catch cross-origin memory usage. We can pass those through with WebPageTest, so all we need now is a custom metric to return the measureUserAgentSpecificMemory breakdown.


[memoryBreakdown]
return new Promise((resolve) => {
  performance.measureUserAgentSpecificMemory().then((value) => {
    resolve(JSON.stringify(value));
  });
});

That snippet will set up a promise to wait for measureUserAgentSpecificMemory to return a result (it currently has a 20 second timeout), then grab the full result, convert it to a string, and return it back so we can dig in.
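
If you want to poke at the same measurement locally before running anything through WebPageTest, something like the following Puppeteer sketch works too. Puppeteer isn't part of the setup described in this post; it's just one convenient way to launch Chrome with the same flags, so treat it as a rough starting point.


// A minimal local sketch, assuming Puppeteer is installed. The flags mirror the
// ones passed to WebPageTest above; this is not the setup used for these tests.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({
    args: ["--disable-web-security", "--no-site-isolation"],
  });
  const page = await browser.newPage();
  await page.goto("https://example.com", { waitUntil: "networkidle0" });

  // Run the same measurement we use as a WebPageTest custom metric.
  const breakdown = await page.evaluate(() =>
    performance
      .measureUserAgentSpecificMemory()
      .then((value) => JSON.stringify(value))
  );
  console.log(breakdown);

  await browser.close();
})();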

To try to come up with some benchmarks, we set up the test[2] with that metric and the Chrome flags, and then ran it on the top 10,000 URLs (based on the Chrome User Experience Report's popularity rank) on Chrome for desktop and again on an emulated Moto G4. Some tests didn't complete due to sites being down or blocking the test agents, so we ended up with 9,548 test results for mobile and 9,533 for desktop.

Let's dig in!

How much JS & DOM Memory Does the Web Use?

Let's start by looking at memory usage of the top 10k URLs by percentile.

I don't know exactly what I expected to see, but I know I wasn't expecting numbers of this size.

At the median, sites are using ~10MB for JavaScript and DOM related memory for desktop URLs, and ~9.6MB for mobile. By the 75th percentile, that jumps to ~19.4MB on desktop and 18.6MB on mobile. The long-tail data from there is...well, a bit frightening.

To put this in context, remember that Chrome research puts JavaScript and DOM related memory at ~45% of memory usage on the web. We can use that to come to a super rough estimate of ~22.2MB of total memory for a single page on desktop and ~21.4MB for mobile at the median, and an even rougher estimate of ~43.1MB of total memory for a single page on desktop and ~41.3MB for mobile at the 75th percentile. That's a lot of memory for a single page within a single window of a single application on a machine that has to juggle memory constraints of numerous simultaneously running applications and processes.
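
The extrapolation itself is just dividing by that 45% share. A quick sketch of the math, using the desktop median as an example:


// Rough extrapolation, assuming JS + DOM memory is ~45% of total page memory
// (per the v8 team's research cited above).
const jsAndDomBytes = 10_000_000;                 // ~10MB median on desktop
const estimatedTotalBytes = jsAndDomBytes / 0.45; // ~22.2MB total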

It's well worth noting, again, that without context around the business impact, it's a bit hard to definitively say how much is too much here.

It's also unclear to me exactly how much memory is available for JS related work in the first place. The legacy API (performance.memory) provides a jsHeapSizeLimit value that is supposedly the maximum size of the JS heap available to the renderer (not just a single page), but those values are proprietary and poorly specified, so it doesn't look like we could rely on that to find our upper bound.
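
For the curious, that legacy value looks like this in Chromium-based browsers. It's non-standard and coarse, so it's only useful as a rough sanity check, not as a true limit:


// Non-standard, Chromium-only legacy API; values are coarse and not comparable
// across browsers, so treat this as a sanity check rather than a true limit.
if (performance.memory) {
  console.log(performance.memory.jsHeapSizeLimit); // max JS heap available to the renderer
  console.log(performance.memory.usedJSHeapSize);  // JS heap currently in use
}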

Still, we can use the results from our tests as rough benchmarks for now, similar to what has been done for other metrics where we don't have good field data to help us judge the impact. Adopting the good/needs improvement/bad levels that Google has popularized around Core Web Vitals, we'd get something like the following:

That itself feels like a helpful gauge, but I'm a big believer that the closer you look at a thing, the more interesting it becomes. So let's dig deeper and see if we can get a bit more context about memory usage.

Correlation between memory and other perf metrics

As we noted earlier, measuring memory in the wild is pretty tough to pull off right now. There are a lot of security mechanisms that need to be in place to be able to accurately collect the data, which means not all sites would be able to get meaningful data today. That's a big challenge, as performance data always becomes more interesting when we can put it in the context of its impact on the business and overall user experience.

In the absence of that data, I was curious how memory usage correlated to other performance metrics, to see if any of them could serve as a reasonable proxy. Probably unsurprisingly, given that we're focused on JavaScript and DOM related memory only, the correlation with paint-related metrics and traditional load metrics is weak. Memory usage does, however, correlate strongly with the amount of JavaScript you ship and with Total Blocking Time. (The closer the correlation coefficient is to 1, the stronger the correlation.)

It seems pretty obvious, I guess, but if you're shipping a lot of JavaScript, expect to use a lot of memory as a result. And if your Total Blocking Time is high, it's reasonable to expect that you're consuming a large amount of JavaScript-related memory.
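
If you want to run the same kind of comparison against your own test results, the coefficient itself is easy to compute. A small sketch (Pearson's coefficient is shown here as one common choice; the arrays are whatever per-run metrics you've collected):


// Minimal Pearson correlation sketch. Pass two equal-length arrays, e.g. JS
// bytes shipped and memory bytes reported for each test run.
function pearson(x, y) {
  const n = x.length;
  const meanX = x.reduce((a, b) => a + b, 0) / n;
  const meanY = y.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let dx = 0;
  let dy = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - meanX) * (y[i] - meanY);
    dx += (x[i] - meanX) ** 2;
    dy += (y[i] - meanY) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// e.g. pearson(jsBytesPerRun, memoryBytesPerRun); closer to 1 means a stronger correlation.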

Where does it all come from?

Next up, let's take a closer look at what that memory is actually made up of.

In the memory breakdown, there's an attribution property that we can use to determine whether the memory is related to first-party content, cross-origin content, or a shared or global memory pool.

Any cross-origin memory will have a scope of cross-origin-aggregated. Any memory allocation that has no scope designated is shared or global memory. Finally, any bytes attributed with a scope other than cross-origin-aggregated (such as Window) are first-party memory usage.

It's worth stressing that attribution is based on the frame (iframes, etc.), not on the URL of the JavaScript file or anything like that. So first-party memory usage here could still be impacted by third-party resources.
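
Putting those rules into code, a rough pass over the breakdown array might look like the sketch below. The bucket names are mine; the scope checks just follow the rules described above.


// Rough sketch: bucket breakdown entries into first-party, cross-origin, and
// shared/global memory based on the attribution scope rules described above.
function bucketMemory(result) {
  const buckets = { firstParty: 0, crossOrigin: 0, shared: 0 };

  for (const entry of result.breakdown) {
    if (entry.attribution.length === 0) {
      // No scope designated: shared or global memory.
      buckets.shared += entry.bytes;
    } else if (entry.attribution.some((a) => a.scope === "cross-origin-aggregated")) {
      buckets.crossOrigin += entry.bytes;
    } else {
      // Any other scope (e.g. "Window") counts as first-party frames.
      buckets.firstParty += entry.bytes;
    }
  }

  return buckets;
}

// const result = await performance.measureUserAgentSpecificMemory();
// console.log(bucketMemory(result));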

If we break down byte usage by where it's attributed, we see that on desktop, 83.9% of that memory is attributed to first-party frames, 8.2% is attributed to cross-origin frames, and 7.9% is shared or global.

Mobile is very similar with 84.6% of memory attributed to first-party frames, 7.5% attributed to cross-origin frames and 7.9% being shared or global memory.

A pair of pie charts showing the data summarized in the prior paragraph.

What does memory usage look like across frameworks?

I hesitated on this, but I feel like we kind of have to look at what memory usage looks like when popular frameworks are being used. The big caveat here is that this doesn't mean all that memory is for the framework itself—there are a lot more variables in play here.

We have to look, though, because many frameworks maintain their own virtual DOM. The virtual DOM is quite literally an in-memory representation of an interface that frameworks use to sync up with the real DOM to handle changes.

So naturally, we'd expect memory usage to be higher when a framework that uses this concept is in place. And, unsurprisingly, that's exactly what we see.

A chart showing the range for the framework related memory data that is summarized in the previous table.

There's a risk of looking at that table and immediately deciding that using React, for example, is terrible for memory while ignoring the other things we know. We know that memory usage is highly correlated to the amount of JavaScript on a page, and we also know that sites that use frameworks ship more JavaScript. So the dramatic increase in memory usage could be more due to the fact that React sites tend to ship a lot of code rather than any inefficiency in the framework.

Accurately zeroing in on the actual memory efficiency of a framework is a bit tougher because there's a fair amount of noise involved. One gauge that could be interesting as a rough indication of memory efficiency is to look at the ratio of bytes in memory compared to JavaScript bytes shipped. DOM elements do factor into memory use, but given the weak correlation between DOM elements and memory usage and the high correlation between JavaScript bytes and memory usage, I think this gives us a rough, but relatively useful, benchmark. A lower ratio indicates better overall efficiency (fewer bytes of memory per JavaScript byte shipped).
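
As a concrete illustration of that ratio (the numbers here are made up, purely to show how the comparison works):


// Hypothetical numbers, only to illustrate the ratio: lower means fewer bytes
// of memory consumed per byte of JavaScript shipped.
const memoryBytes = 19_000_000; // JS + DOM memory reported by the API
const jsBytes = 950_000;        // JavaScript bytes shipped
const ratio = memoryBytes / jsBytes; // 20 bytes of memory per JS byte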

I'll be the first to admit: these results surprised me. React appears to have a lower memory-to-JS-bytes ratio than the other comparison points. Of course, React also tends to result in a lot more JavaScript being shipped than the others (other than Angular), which is part of the reason why we see such high memory usage even in the best-case scenario. As always, the big advice here is: whether you're using a framework or not, keep the total amount of JavaScript as small as possible.

There's another big caveat here: this data is memory usage based on the initial page load. While that's interesting in and of itself, it doesn't really tell us anything about potential memory leaks. In a few one-off tests I ran, memory leaks were very common in single-page applications (throw a dart at a group of them and odds are you'll hit one with a leak), but that's a topic for another post.

Summing it up

Memory is still a largely unexplored area of web performance, but that probably needs to change. As we ship ever-increasing amounts of JavaScript, memory usage creeps up as well.

We still need more information to round out the full picture. How much memory is actually available to the browser at any point in time? How does memory correlate to key business and user engagement metrics? What about memory usage that isn't related to JavaScript and DOM complexity?

While there may be challenges in getting this data for your site today using real-user monitoring, the same approach I took for these tests (a couple of Chrome flags paired with a custom metric) makes it possible to start pulling memory-related data into your own test results today. I would love to see folks doing just that, so we can learn more about how we're doing today, what the implications are, and how we can start to improve.

  1. I'm guessing that images are a large part of the remaining 55%. I've written about images and memory in the past, but the basic gist is that to find the amount of memory an image requires, you take the height of the image multiplied by the width of the image multiplied by 4 bytes. So, in other words, a 500px by 500px image takes up 1,000,000 bytes of memory (500 x 500 x 4).
  2. Thanks to Ulan Degenbaev and Yoav Weiss for being incredibly patient with me while I was trying to set these tests up and understand the results.