Improving Perceived SPA Performance by Prefetching Critical Resources
I remember being intrigued by Smashing Magazine’s post this summer about using PJAX to effectively turn static markup websites into single-page apps (SPAs). Turbolinks even turned the idea into a library! But I was especially interested in the section about prefetching the content to improve perceived page performance. The idea is to make requests for a new page when a user hovers over a link rather than when they click. By the time they actually do click the link, the resources are already available, and the site appears to be much faster—at very little extra cost.
At Sift, we decided to roll out a similar idea, but focused specifically on our console SPA. Instead of prefetching complete web pages, we wanted to prefetch the critical resources needed to render a subsection of the console. This can diminish or even eliminate that distracting loss of context, where a section is temporarily replaced with a loading spinner, only to be replaced again when the new section’s resources become available. Check out this subtle yet wonderful difference going from our Search Page to our User Details Page:
Before:
After:
Notice how, in the second gif, the bar at the top (which we call the user bar) renders immediately along with the cards (themselves in a loading state awaiting their own resources) without the intermediate blank spinner page?
Here’s another, more apparent view, when switching between User Details Pages:
Before:
After:
By prefetching the resource(s) critical to rendering—in this case, the resource for the particular user—we can transition much more seamlessly while keeping our customers “in context”!
Implementation Details
Building the infrastructure for prefetching isn’t anything revolutionary; it really just requires three things:
1. a way to establish user intent, so we only prefetch when a click seems likely
2. a resource cache to hold the prefetched responses
3. a way to integrate the prefetch into your app’s links
Of course, since it’s pretty tightly coupled with resource fetching, each implementation might look a bit different. I’ll touch a bit on how we went about it at Sift Science, where we use React with Backbone Models, and you can build on it or adapt it to your own tech stack, if you like.
Establishing User Intent
To ease the additional load on our API, we need to be relatively sure that the user might actually click a link before sending off requests for the link’s critical resources. For example, if a user drags their mouse quickly across the screen, hovering over 20 links in the process, we don’t want to fire off 20 (or 40, 60…) requests at once. We can address this simply with a timeout, but we don’t want to wait too long, either, since any time spent waiting before the request cuts into the time the prefetch saves. 50ms should do the trick:
// our mouseenter event handler
(evt) => {
  const prefetchTimeout = window.setTimeout(() => {
    // do our prefetch
  }, 50);

  // if we leave the prefetch element before the timeout, don't prefetch
  evt.target.addEventListener(
    'mouseleave',
    _.once(_onMouseLeave.bind(null, prefetchTimeout))
  );
}

// our mouseleave handler
function _onMouseLeave(timeout) {
  window.clearTimeout(timeout);
}
Resource Cache
Hopefully, if you’re building a large application, you’ve already implemented caching for your resources. Ours looks similar to this, where ModelCache.get wraps our Backbone requests inside a Promise:
// if no model with this key exists in the ModelCache, a network request is sent
ModelCache.get('myModelKey', MyModelConstructor, options)
    .then((myModel) => {
      // myModel, an instance of MyModelConstructor, is now available
      // to use as "data state" in our React Component
    });
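We won’t dig into the cache internals here, but to give a sense of the concept, a bare-bones sketch (not our actual implementation; a real version also needs eviction, error handling, and the extra options handling we’ll get to at the end of the post) might look something like:

// a bare-bones illustration only: key -> in-flight (or resolved) Promise
const _cache = {};

const ModelCache = {
  get(key, Constructor, options) {
    if (!_cache[key]) {
      const model = new Constructor();

      // Backbone's fetch() accepts success/error callbacks, which makes
      // wrapping the request in a Promise straightforward.
      // (options is ignored in this sketch)
      _cache[key] = new Promise((resolve, reject) =>
        model.fetch({success: () => resolve(model), error: reject})
      );
    }

    return _cache[key];
  }
};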
Now getting our prefetch to work is a piece of cake: all we need to do is call ModelCache.get() without a .then. After all, we’re simply making the request and storing the result, not operating on it.
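In other words, reusing the hypothetical key and constructor from the snippet above, the prefetch itself boils down to:

// fire the request and let the ModelCache hold onto the resulting Promise;
// when the component eventually mounts and calls ModelCache.get with the
// same key, it resolves from cache instead of hitting the network again
ModelCache.get('myModelKey', MyModelConstructor, options);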
Integrating into Your App
Since we use React, our ideal prefetch integration would look something like this, where PrefetchKeys.MY_SECTION is a key that the prefetch uses to look up the relevant critical resources to request from ModelCache:
<a
  className='link-to-my-section'
  onMouseEnter={prefetch(PrefetchKeys.MY_SECTION)}
>
  My Section
</a>
Given that, we can put our reusable prefetch handler together like this:
// prefetch.js
import {ModelKeys, ModelMap} from 'path/to/model_cache_constants';

// (ModelCache, _, PREFETCH_TIMEOUT, _onMouseLeave, and _getConstructor are
// assumed to be imported or defined elsewhere in this module)

export const PrefetchKeys = {
  MY_SECTION: Symbol('mySection'),
  MY_OTHER_SECTION: Symbol('myOtherSection'),
  // ...
};

// here's where we define which prefetch will correspond with which resource requests
const criticalResourcesMap = {
  [PrefetchKeys.MY_SECTION]: () => [ModelKeys.CRIT_RESOURCE_1, ModelKeys.CRIT_RESOURCE_2],
  [PrefetchKeys.MY_OTHER_SECTION]: (id) => [ModelKeys.CRIT_RESOURCE_3 + id],
  // ...
};

export default (prefetchKey, ...args) => {
  var fetched,
      modelKeys = criticalResourcesMap[prefetchKey](...args);

  return (evt) => {
    var {target} = evt,
        prefetchTimeout;

    // safeguards to prevent multiple `get` calls and so that we don't error
    // if the component isn't mounted when this runs
    if (target && !fetched) {
      prefetchTimeout = window.setTimeout(() => {
        modelKeys.forEach((modelKey) => {
          let constructor = _getConstructor(modelKey);

          if (constructor) {
            ModelCache.get(modelKey, constructor);
          }
        });

        fetched = true;
      }, PREFETCH_TIMEOUT);

      // if we leave the prefetch element before the timeout, don't prefetch
      target.addEventListener(
        'mouseleave',
        _.once(_onMouseLeave.bind(null, prefetchTimeout))
      );
    }
  };
};
Dynamic Keys and Fuzzy Matching
The snippet above will get you most of the way there, but it adds a couple of things to take care of something we haven’t discussed yet—dynamic keys. Many models can be statically referenced, e.g., a customer’s account record to show in their profile page. But many others are variants of a resource. Search results, for instance, would be cached per query. In our gif example above, we’re caching per user id, so referencing ModelKeys.USER wouldn’t cut it. We’d need to generate the key dynamically.
That’s where the rest parameter in our prefetch function comes in handy. Each value in the criticalResourcesMap is a function, so passing those arguments directly to them means that we can create keys however we want. In ModelKeys.CRIT_RESOURCE_3 above, we append an id.
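For example, a link that prefetches MY_OTHER_SECTION for a particular user might pass the id along at the call site (the user variable here is hypothetical):

// `user` is a hypothetical object in scope; its id flows through prefetch's
// rest parameter to the criticalResourcesMap function above, producing the
// dynamic cache key ModelKeys.CRIT_RESOURCE_3 + user.id
<a
  className='link-to-my-other-section'
  onMouseEnter={prefetch(PrefetchKeys.MY_OTHER_SECTION, user.id)}
>
  {user.name}
</a>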
Once we’ve generated our dynamic key, though, we can no longer do a direct lookup in the ModelMap to get its constructor. That’s why we’ve included the _getConstructor function above, which (let’s say, a little naively) might look something like this:
function _getConstructor(modelKey) {
  return ModelMap[modelKey] ||
    // assuming all model keys begin with the ModelMap key it's associated with
    ModelMap[Reflect.ownKeys(ModelMap).find(
      (mapKey) => new RegExp('^' + mapKey).test(modelKey)
    )];
}
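To make the fuzzy match concrete, suppose (hypothetically) that our user model lives in ModelMap under the string key 'user' and that the dynamic key for user 12345 is 'user12345':

// static key: direct lookup succeeds
_getConstructor('user');      // -> ModelMap['user']

// dynamic key: there's no ModelMap['user12345'] entry, but /^user/ matches,
// so the fuzzy branch also returns ModelMap['user']
_getConstructor('user12345');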
Wrapping Up
This is about as far as we’ll take the prefetching idea in this post, but we’re actually not quite done. You may have noticed that we aren’t passing an options hash in our snippet’s get request—something we’ll definitely want to include in our production-ready prefetch. Our get call should look more like:
// ...
let constructor = _getConstructor(model.key);

if (constructor) {
  ModelCache.get(model.key, constructor, {
    fetchData: model.data,
    options: model.options
  });
}
This implies that we might want to store an array of model hashes in our criticalResourcesMap as opposed to an array of cache keys. Then again, our most common cases might not use these fields, and it is kinda nice to just pass everything along as a string, so we could do some sort of key generation/parsing where the data and options are passed along as part of the cache key. This would make the above function become:
// ...
modelKeys.map(_parseKey).forEach((model) => {
  let constructor = _getConstructor(model.key);

  if (constructor) {
    ModelCache.get(model.key, constructor, {
      fetchData: model.data,
      options: model.options
    });
  }
});
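What _parseKey looks like depends entirely on how you encode the data and options into the key. A minimal sketch, assuming a made-up '&lt;modelKey&gt;|&lt;JSON payload&gt;' format, might be:

// hypothetical key format: 'user12345|{"data":{"expand":true},"options":{}}'
function _parseKey(cacheKey) {
  var [key, payload] = cacheKey.split('|'),
      parsed = payload ? JSON.parse(payload) : {};

  return {key, data: parsed.data, options: parsed.options};
}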
There are certainly other ways to accomplish this, and I’ll leave it to you to decide how to proceed from here (but please keep us posted in the comments!). In any case, I hope we’ve presented a solid argument for integrating some kind of prefetching framework into your own SPA. It’s like free performance!
Love frontend performance? We love you. Come fight fraud with us!