Up to speed with web performance

On Thursday and Friday I learned all sorts of things about making sites faster at the performance.now() conference in Amsterdam. I frantically took notes at each talk, so here’s my summary of the event.

Performance has always been one of my main interests in web development: if you want a good user experience, speed is essential. The performance.now() conference was two days, with eight talks on each day. Some had lots of theory, others were case studies. The balance between different types of content was excellent. I thoroughly enjoyed learning more about resource hints, HTTP/2, rendering performance of UI and fonts, the web itself and how performance works in large organisations.

See also: the conference videos on YouTube.

Yoav Weiss in front of a slide saying ‘HTTP/2 to the rescue’: HTTP/2 was a recurring theme

Day 1

Steve Souders - Making JavaScript fast

The conference kicked off with the grand… eh godfather of web performance: Steve Souders. He talked about the issues that stand in the way of everything (the ‘long poles in the tent’), based on data from the HTTPArchive. The longest pole: JavaScript. Scripts are not just bad news for performance because of download time, but also because parsing them costs CPU time on our users’ devices. Steve recommended reducing the amount of JavaScript (check code coverage, budget third parties), and for what is left: load it async (or better: defer), use <link rel=preload as=script> (or better: send it as a header) and double-check that your assets are actually compressed.
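
A rough sketch of those last recommendations (the file path is made up for illustration):

```html
<!-- Hint early that this script will be needed; /assets/app.js is a hypothetical path -->
<link rel="preload" href="/assets/app.js" as="script">

<!-- defer downloads the script without blocking the parser and preserves execution order -->
<script src="/assets/app.js" defer></script>
```

The same preload hint can also be sent as an HTTP response header, which the browser sees before it has parsed any HTML: `Link: </assets/app.js>; rel=preload; as=script`.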

Harry Roberts - Third-party scripts

Harry Roberts discussed third party scripts in four stages: understanding what the problem is, auditing for it, talking to internal teams and vendors, and reducing the problem where possible (Harry’s slides). Third party scripts can affect users’ security and they can slow things down (or even be a single point of failure). For auditing, Harry mentioned Request Map, a tool that takes a WebPageTest run as input and creates a map of all third party requests. With that data in hand, he explained, you can go talk to the vendor of the scripts (note: they might lie). Or you can go to the department responsible for the scripts and have a polite meeting with them (‘ask questions, don’t demand immediate removal’). This sparked some discussion at the drinks afterwards: polite is probably good, but shouldn’t developers really be quite firm about reducing tracking? To mitigate third party performance issues, Harry recommended self-hosting, async loading where possible and preconnect to ‘warm up’ the connections to origins that are most important and that you’ll need frequently.
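
A minimal sketch of those last two recommendations, with a hypothetical analytics origin:

```html
<!-- Warm up the connection (DNS, TCP, TLS) to an origin you know you'll need soon -->
<link rel="preconnect" href="https://analytics.example.com" crossorigin>

<!-- Load the third-party script asynchronously so it cannot block rendering -->
<script src="https://analytics.example.com/tracker.js" async></script>
```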

Anna Migas - Debugging UI perf issues

Anna Migas showed us how to improve what she calls ‘perceived load performance’ (Anna’s slides). If we want smooth interactions, we have to do our calculation work between frames. We should ensure our users get not just 60fps, but possibly even 120fps, which is a thing on iPad Pro and Razer devices. She explained step by step how a browser builds a frame and shared great tricks for avoiding jank (offload tasks off the main thread) and diagnosing its cause in DevTools (the Performance, Rendering and Layers panels). She also explained issues with certain interaction patterns, like animations (janky if you animate the wrong properties or too many at once), parallax scrolling (‘paint storms’), fixed elements (lots of repainting) and lazily loaded images (content jumps). A great trick for improving performance of fixed or sticky elements: get browsers to render them in their own layer (which can be forced by using will-change or transform: translate3D(0,0,0)). She did warn: more layers consume more memory, so use them wisely.
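
A small sketch of that layer trick (the class name is hypothetical):

```css
/* Promote a fixed header to its own compositor layer so scrolling
   doesn't repaint it; extra layers cost memory, so use sparingly. */
.site-header {
  position: fixed;
  top: 0;
  will-change: transform;          /* the modern hint */
  transform: translate3d(0, 0, 0); /* the older trick that forces a layer */
}
```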

Adrian Holovaty – Geeking out with performance tweaks

Adrian created Soundslice, a cleverly built web interface for sheet music. Because it displays sheet music, it has to render complex combinations of notes, which one might think is hard to do performantly. Soundslice works with a canvas that has to redraw very often, with clever features like highlighting the note that is currently being played. You can get this kind of thing super performant with very careful coding, Adrian showed: do just the things required, and not too often. Being the co-creator of a famous framework himself (Django, for Python), he surprised some by recommending not to use frameworks, as they are often full of things your app doesn’t need. This is especially true for front-end frameworks, as their redundant code gets shipped to and parsed by users. Food for thought, I heard many say in the break afterwards. So, no big library, but a small set of 8 utility functions in vanilla JavaScript. Adrian also showed how requestAnimationFrame helped avoid redrawing too often. Chrome DevTools and Google’s Closure Compiler helped him ship better code. Other cool tricks: one global tmpList that gets reused every time a function needs a temporary array, singleton functions where possible, a cache of fraction objects to avoid memory overuse, caches for expensive calculations, and bitfields to use fewer bytes for storing the meta information of notes (one bit per boolean instead of a whole byte). Mind. Blown.
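
A minimal sketch (not Soundslice’s actual code) of how requestAnimationFrame can cap redraws at once per frame:

```js
// Schedule at most one canvas redraw per frame, no matter how often state changes.
let redrawScheduled = false;

function scheduleRedraw() {
  if (redrawScheduled) return;          // a redraw is already queued for the next frame
  redrawScheduled = true;
  requestAnimationFrame(() => {
    redrawScheduled = false;
    draw();                             // the actual, expensive canvas drawing
  });
}

function draw() {
  // ...clear and repaint only what changed...
}

// Playback or scroll handlers can call scheduleRedraw() as often as they like;
// the canvas is still repainted at most once per frame.
```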

Adrian in front of a slide of sheet music with one section highlighted: Adrian shows sheet music

Zach Leatherman - Fonts

Those who had hoped to walk out of Zach Leatherman’s talk with the one, ultimate, bullet-proof solution to loading fonts fast were out of luck (Zach’s slides). There is just too much to think about, and most solutions will be a trade-off in some way. But there’s lots to tweak. Zach walked us through improving font loading in a default WordPress theme. He showed lots of tools that can help us deal with what he called the biggest enemies of font loading: invisible text (because the font hasn’t loaded yet) and moving text (because the fallback font gets replaced by the real thing). Reflowing content is annoying, so Zach proposed a new performance metric: ‘web font reflow count’. Good web font implementations try to keep this number low; only implementations without web fonts have it at zero. Waiting endlessly for final versions of all fonts is not ideal either. Reflow can also be avoided if fallback fonts look a lot like the web fonts, or if bold web fonts fall back to synthetically bolded regular weights (which, Zach showed, Safari can do). He also talked about how to use preconnect (in some cases), self-hosting, font-display (can give sensible results without JS), the Font Loading API (full control over what and when) and subsetting, with clear and helpful examples. I loved it and will review this site’s web font choices.
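
As an example of the font-display option (the family name and file path are illustrative):

```css
/* "swap" shows the fallback font immediately and swaps in the web font when it
   arrives: no invisible text, at the cost of one reflow. */
@font-face {
  font-family: "Body Font";
  src: url("/fonts/body-regular.woff2") format("woff2");
  font-display: swap;
}
```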

Natasha Rooney - Protocols

As someone who is just an ordinary front-end developer, I always find people who understand all about networking a bit intimidating. Natasha Rooney is one of them, even though she said ‘protocols are easy’. She works on protocols and networking standards at the W3C and IETF. In one of her first slides, Natasha showed the absolute boundaries of network speeds on a map. They are dictated by physics: ‘you can’t beat the speed of light’. To optimise that part of the web stack, the advice is to serve content from servers physically closer to your users (think CDNs). With awesome supermarket metaphors, she explained pipelining. She then went into the details of SPDY, the Google protocol that later became the basis for HTTP/2, and QUIC, again a Google invention. QUIC may help solve head-of-line blocking issues and reduce latency. It could end up being standardised as HTTP/3. Both SPDY and QUIC run on existing protocols (TCP and UDP respectively), because middleboxes often disallow other protocols.

Andrew Betts – Fun with headers

When Andrew Betts worked in the Technical Architecture Group (TAG), he had to review proposals for new HTTP headers, amongst, presumably, lots of other tasks. For this talk, he looked into the HTTPArchive to show us some headers: those we should want and those we should not. The ones to keep were restrictive headers like Content-Security-Policy, Strict-Transport-Security and Referrer-Policy (all for security) and permissive ones like the Access-Control headers and client hints (full overview in Koen Kivit’s tweet). Andrew also looked a bit at the future: particularly interesting were Feature-Policy and the idea of setting some of these headers in a JSON file served from a well-known endpoint.
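
For illustration, this is roughly what the restrictive trio looks like in a response (values are examples, not recommendations):

```http
Content-Security-Policy: default-src 'self'
Strict-Transport-Security: max-age=31536000; includeSubDomains
Referrer-Policy: strict-origin-when-cross-origin
```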

Tammy Everts - What’s the best UX metric?

Tammy Everts shared with us what she thinks works best as a performance metric focused on user experience (Tammy’s slides). She started with some history of performance metrics and their problems, then went into current metrics that aid UX. More modern metrics often start with ‘First’ (First Paint, First Interactive, etc), and SpeedCurve introduced metrics for important things on the page (like ‘largest background image rendered’). Tammy also said you can define your own custom metrics; some companies do, like Twitter (‘Time to First Tweet’). Collecting measurements is one thing, but what do you do with them? Tammy talked a bit about per-page performance budgets, based on some of the metrics she showed before.
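
Custom metrics like these can be recorded with the User Timing API; a minimal sketch with a made-up metric name:

```js
// Mark the moment the important content is on screen...
performance.mark('results-rendered');

// ...measure from navigation start to that mark...
performance.measure('time-to-results', 'navigationStart', 'results-rendered');

// ...and read the measurement back to send to your analytics.
const [measure] = performance.getEntriesByName('time-to-results');
console.log(Math.round(measure.duration)); // duration in milliseconds
```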

Day 2

Tim Kadlec - The long tail of performance

Tim Kadlec’s talk opened day two and it was about ‘the long tail’. That isn’t always the same group of people: it could be you on a flight, you after forgetting to charge your phone, or you visiting a rural area. In other words: the real world. Tim showed how he worked with the MDN team to reduce loading times. A large part of the page size was fonts; choosing a system font instead of Mozilla’s brand sans-serif and subsetting their slab serif helped heaps. Talking more generally, Tim explained that in years and years of audits he had never seen a site that couldn’t have made its core user tasks faster. There are always assets, old code or other stuff that could be removed. And we can choose to do that, Tim explained; we can be proactive and provide a resilient and solid experience. Truly inspiring opening words. Practically, he recommended testing on older devices and slower connections.

Speakers quoting speakers: Tim Kadlec refers back to Steve Souders’ point from the day before that JavaScript is the main culprit for slow web pages

Yoav Weiss - Resource Loading

Yoav Weiss talked about what makes resource loading slow, how we can solve it today and how it will get better in the future. The problems: establishing a connection takes many round trips, servers take time to process requests, connections are slower than necessary at first (TCP slow start), browsers sometimes have idle time while figuring out which resources to load, resources load in turns once discovered, and we often load more than necessary (bloat). HTTP/2 helps with a lot of this, but not everything. In the future, we’ll be able to use the wait time with HTTP/2 Push, we should unshard domains (sharding is an antipattern in HTTP/2), take advantage of preload and preconnect headers (responsibly; adding them to everything messes with prioritisation and makes things worse) and we can turn on BBR (see also: Optimizing HTTP/2 prioritization with BBR, via Tammy Everts).
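
Preload and preconnect as response headers might look like this (paths and origins are illustrative):

```http
Link: </css/site.css>; rel=preload; as=style
Link: <https://img.example.com>; rel=preconnect
```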

Kornel Lesiński - Optimising images

Did you know JPEG images don’t contain pixels? Kornel Lesiński explained they contain layers of abstract patterns instead, which together form the image. Some parts contain detail and others don’t, and by moving all non-detail data to the beginning of the file, you get the effect of a low-res image slowly becoming hi-res: the image loads ‘progressively’. Progressive JPEGs have one issue: if you have multiple, they load one by one, so as a group they’re not progressive. This can be mitigated by configuring nginx to send just the first 15% of each file, then wait so that the network can do other things, then send the rest. This doesn’t work on CDNs, as they serve static files from cache, unless you use Edge Workers. Kornel also showed various future image compression techniques, the best of which currently only exist in browsers as video codecs. The trick to using those? Load hero images in a <video> and you’re good to go, Kornel explained. Clever. And probably not for production.
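
The hero-image-as-video trick could be sketched like this (file names are made up, and as Kornel said, probably not for production):

```html
<video autoplay muted playsinline poster="hero-fallback.jpg">
  <!-- a single-frame video, so a modern video codec does the image compression -->
  <source src="hero-frame.mp4" type="video/mp4">
</video>
```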

Katie Sylor-Miller - Performance archeology

Katie Sylor-Miller (Katie’s slides) had this fantastic approach of marrying archeology (her background) with web performance, and with… Indiana Jones. The talk was, at the same time, a case study of Katie’s work at Etsy. The team’s hypothesis was that listing page performance improvements would lead to more conversion, so they surveyed the pages, found the culprits and acted on them. It involved manual JS removal at first, but that was later replaced by a very nifty-looking in-house solution called ‘Vimes’, which automatically finds which parts of the JS are never or rarely used. To avoid having to do this at all, Katie recommended ‘architecting for deletion’. Interesting findings from the research were that the before/after difference got much larger on slower connections and that, yes, performance definitely affected conversion.

Jason Grigsby - Progressive Web Apps challenges

It’s been a couple of years since the idea of Progressive Web Apps was introduced. Jason Grigsby talked about some challenges in 2018 PWA land (Jason’s slides). One such challenge is the definition: it would make sense to have different definitions for developers (like Jeremy’s) and marketeers (Alex and Frances’ original definition, or Google’s, which has gone through a couple of iterations). If anything, the acronym is not for developers. There’s plenty of FUD around Progressive Web Apps, Jason explained, and they can do more than we think. Geolocation, Web Payments, an app shell architecture and offline support can all be part of a good PWA experience. Hidden challenges include appearing to use more storage than expected (a security measure in browsers dealing with opaque responses) and designing for the intricacies of a seamless offline experience.

Scott Jehl - Move fast and don’t break things

While most speakers talked about how to make existing sites faster, Scott Jehl recommended we all not make them slow in the first place. His technique of choice for this: progressive enhancement. It was effective many years ago and it is effective still, Scott explained, for performance, but also for accessibility and SEO. He discussed various techniques that fit well within this mindset: HTTP/2 Server Push (Filament Group already use it in production), inlining CSS (or the most critical subset of it, if there’s a lot), the lazyload attribute that is in experimental Chrome (PR for lazyload against the HTML specification) and Service Workers on the CDN level, so that you can A/B test without bothering users (if you do it on the client, consider this new metric: second meaningful content). Great conclusion: it is easier to make a fast website than to keep a website fast.
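
A hedged sketch of inlining critical CSS while loading the full stylesheet without blocking rendering (the path is illustrative, and this uses the well-known media="print" trick rather than necessarily Filament Group’s exact setup):

```html
<head>
  <style>
    /* ...only the rules needed to render above-the-fold content... */
  </style>

  <!-- Loaded as non-blocking "print" CSS, then applied to all media once it arrives -->
  <link rel="stylesheet" href="/css/site.css" media="print" onload="this.media='all'">
</head>
```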

Michelle Vu - Performance at Pinterest

Michelle Vu runs the performance team at Pinterest. Her fantastic talk gave us insight into how they got, and keep, their site fast. The results were very positive: a 30% faster site got them 10% more sign-ups. Michelle explained that what worked for them was to pick one key metric, which in Pinterest’s case was a custom metric, and she said it is helpful to tie performance metrics to business metrics. When measuring, it helps to look both at synthetic data (test runs) and RUM data (real users). Michelle showed they each serve a need: synthetic results are consistent and you can run the tests before deploying (e.g. on code merge), while RUM data shows the impact on real users. Michelle also showed a bit of their performance tooling: regressions are measured and there’s a performance regression pager duty. So cool! The most important lesson I learned from this talk is that education is very important. Pinterest do this in their pattern library, and they document things like experiments (how they were run, the results, etc) and the details of optimisations. Besides documentation, communication and in-person training were just as important for educating others.

Paul Irish - Closing keynote

Closing off the last day of the conference was Paul Irish. He talked about the state of metrics, specifically two kinds: visual metrics and interactivity metrics. Visual metrics are things like First Meaningful Paint, Visual Completion, Layout Stability (a.k.a. ‘layout jank’) and something Paul called Last Painted Hero, a measurement of the last paint of the most important content (see also: Rendering Metrics by Steve Souders). Interactivity metrics are things like First CPU Idle and First Input Delay. Paul also showed how to measure these metrics for real users using a PerformanceObserver. The flow he recommends is to write code, test it synthetically (‘in the lab’), ship it, then validate it with real users (‘RUM’).
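
A minimal sketch of collecting some of these metrics from real users with PerformanceObserver:

```js
// Log paint timings (first-paint, first-contentful-paint) and the first input.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.entryType, entry.name, Math.round(entry.startTime));
  }
});

// "buffered" also delivers entries that happened before the observer was created.
observer.observe({ type: 'paint', buffered: true });
observer.observe({ type: 'first-input', buffered: true });
```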
