When Harry Roberts tweeted he would be free to give a workshop in The Netherlands this week, Joël approached Xebia and Fronteers and made it happen. Almost 20 of us gathered in the Wibautstraat in Amsterdam and spent a whole day getting all nerdy about web performance.

Harry told us all about how to sell performance, how (and what) to measure and which tools to use. And of course, how to fix the problems you measure. He also went into how all of this is impacted by HTTP/2 and Service Workers (spoiler: turn them off when performance testing). The workshop came with plenty of theory, which I liked, as it helped me understand how things work in practice. We ended by scrutinising a number of actual websites using the things we had learned.

Please find some of my notes below.

Selling performance

  • When trying to sell performance to others in your company, tune the message to your audience. With performance, it is easy to get excited about the technical details, especially as a developer, but concurrent downloads in HTTP/2 may interest your audience less than a forecast of paying less for bandwidth.
  • Use stats. If your company does not keep them, you can refer to stats from others; WPOstats is great for this.
  • Your users may have capped data plans, they may be in an area with poor network conditions. Think of the next billion internet users, they may not have fast iPhones and great WiFi. See also What does my website cost and Web World Wide.

What to measure

  • Testing on your own device with your own network helps and it is easy, but it is nowhere near as valuable as Real User Monitoring (RUM). There are services that help with this, such as SpeedCurve and Pingdom. Google Analytics also provides some rudimentary RUM tools.
  • Beware of metrics such as page weight, number of requests and fully loaded time. Whether and how they apply depends on your situation and your performance strategy.
  • Perceived performance is a better thing to focus on, although it is harder to measure. Numbers from automated tools require thorough evaluation; human judgment is essential.
  • Speed Index is a good metric to focus on. It is a ‘user-centric measurement’: the number is based on visual completeness of a page.
  • You can use the Speed Index median as a metric in WebPageTest by adding ?medianMetric=SpeedIndex to the end of your WebPageTest result URL.

The network and requests

  • TCP comes with head-of-line blocking: if one packet fails to come through, it has to be retransmitted, and the packets behind it are held up in the meantime.
  • The network is hostile, assume it will work against you.
  • Beware of the difference between latency and bandwidth. Latency is how fast a certain path can be travelled, bandwidth is how much we can carry on our journey. Compare it with a motorway: latency is how fast we can drive, bandwidth is how many lanes are available for us to use.
  • The Connection: Keep-Alive header leaves a connection open for further transfers, preventing new connections (with new DNS lookups, handshakes, etc) from having to be set up all the time.
  • High Performance Browser Networking by Ilya Grigorik is a great book and it is available online for free. It is very technical, the mobile section is especially great.
  • Avoid redirects, they waste your user’s time. This is especially true on mobile, where requests are even more fragile.
  • Beware of whether your URLs have a trailing slash. If you are not precise about where you link to, you can cause redirects to happen. For example, if your ‘About’ page lives at /about/, you link to /about and you’ve set up a redirect between the two, you can save your user that redirect by just linking to the right path in the first place.
  • Eliminate 404s, as they are not cached (this is a feature).
  • Make use of cache, do not rely on it.
  • Use compression, for example gzip.
  • Concatenate your files to circumvent browsers’ limit on concurrent downloads per domain (stop doing this when serving over HTTP/2).
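
The trailing slash point can be illustrated with a sketch, assuming a hypothetical ‘About’ page whose canonical URL is /about/ and a redirect from /about to it:

```html
<!-- Triggers a redirect first: the server has to send the user to /about/ -->
<a href="/about">About</a>

<!-- Goes straight to the page, no redirect, no wasted round trip -->
<a href="/about/">About</a>
```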

Resource hints to increase performance

In what was likely the most exciting section of the workshop, Harry went into resource hints: DNS Prefetch, Preconnect, Prefetch and Prerender. Web performance hero Steve Souders talked about these concepts in his Fronteers 2013 talk “Pre-browsing”. Browser support for resource hints has improved massively since.
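As a rough sketch, the four hints look like this in markup (the domains and file names are made up for illustration):

```html
<!-- Resolve the DNS for a third-party domain ahead of time -->
<link rel="dns-prefetch" href="//fonts.example.com">

<!-- Go further: do the DNS lookup, TCP handshake and TLS negotiation -->
<link rel="preconnect" href="https://fonts.example.com" crossorigin>

<!-- Fetch a resource we will likely need soon, at low priority -->
<link rel="prefetch" href="/css/next-page.css">

<!-- Fetch and render a whole page we expect the user to navigate to -->
<link rel="prerender" href="https://www.example.com/next-page/">
```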

  • If you use both dns-prefetch and preconnect, put the preconnect hint first; the browser will then skip over the dns-prefetch hint, as the DNS lookup has already happened as part of the preconnect.
  • Long request chains are expensive: this is what happens when you include a CSS file, which imports another, which then requests a font, and so on. The browser only finds out about the font once it has parsed the second CSS file. With preload you can tell the browser to fetch assets early, so that it already has them by the time the second CSS file requests them.
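
The chain-breaking trick could look like this, assuming a hypothetical font file that would otherwise only be discovered deep inside a chain of stylesheets:

```html
<!-- Tell the browser about the font up front, instead of letting it
     discover the URL only after parsing two stylesheets -->
<link rel="preload" href="/fonts/webfont.woff2" as="font" type="font/woff2" crossorigin>

<!-- main.css @imports type.css, which in turn requests the font -->
<link rel="stylesheet" href="/css/main.css">
```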

Anti patterns

  • Avoid Base64 encoding as it has little repetition and therefore gzips badly.
  • Don’t use domain sharding (the practice of using different domains to allow for more concurrent downloads) for any assets that are critical to your site, such as your (critical) CSS.
  • Many of today’s best practices are HTTP/1.1 oriented. When you start serving over HTTP/2, beware that some of them become anti patterns.

The future

  • If you serve over HTTP/2, you can benefit from multiplexing and concurrency (download all assets at the same time), improved caching, much smaller headers and Server Push, which lets the server send files the user has not yet requested.
  • Service Workers can speed up things tremendously as you can use them to have more granular control over what you serve from where.

Although I was familiar with most of the concepts covered, I did pick up lots of things that I did not know about. There were also practical examples, and Harry explained everything with lots of patience and at just the right speed. If Harry is coming to your town with his performance workshop, I would recommend coming along.
