I recently revisited a post by Jeff Atwood which argues that performance is a feature. According to him, slow websites offer a degraded user experience, which ultimately hurts conversions. Conversely, users, especially developers, love to use websites that are fast. I can very much relate to this. I tend to visit some websites more often simply because they load really fast. I click on AMP links in my Google search results because they load instantly. So, naturally, speed is a competitive advantage. When we started hashnode.dev, our #1 goal was to offer superior performance compared to every other solution out there. I am glad to announce that we are almost there. We will keep working to boost performance in the future, but we have already achieved some major milestones.
With hashnode.dev, we are offering developers a better way to create and run a personal blog. We win only if our users win. Therefore, we have worked hard to make sure each blog gets a PageSpeed score of 100. While bloggers focus on creating great content, we focus on delivering that content to their readers as fast as possible.
We have used various tools and techniques to deliver superior performance. In this article I am going to list some of my favourite ones. Check them out and feel free to let me know what you think. This is a super quick article (just like your blog 😉) -- so grab a cup of coffee and join me!
Let's get started!
No blocking JS or CSS
Hashnode-powered blogs are built with Next.js. Even though we use code splitting, we still depend on a few external scripts. But there is no reason we should load all of them upfront. So, we use defer on all external scripts. As a result, the browser doesn't have to pause rendering to fetch the scripts; they are downloaded asynchronously and executed only after the HTML is parsed.
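As a tiny illustration, this is the shape of tag that gets emitted for a third-party script (the helper function here is made up for this post, not Hashnode's code):

```javascript
// Hypothetical helper: build a deferred <script> tag for an external script.
// With `defer`, the browser fetches the script in parallel with HTML parsing
// and runs it only after parsing finishes, so rendering is never blocked.
function deferredScriptTag(src) {
  return `<script src="${src}" defer></script>`;
}

console.log(deferredScriptTag('https://example.com/analytics.js'));
// <script src="https://example.com/analytics.js" defer></script>
```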
We also do something similar for CSS. We extract critical CSS and inline it. In case you are not aware, critical CSS is the minimal amount of CSS your web page requires to display the above-the-fold content properly. Once the page has loaded fully, we download the rest of the CSS asynchronously. As a result, there is no blocking CSS or JS delaying the rendering of HTML. As soon as the browser gets the response from our server, it can start rendering without extra HTTP round trips.
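A minimal sketch of this pattern (the helper and the preload/onload swap are illustrative assumptions, not our exact implementation):

```javascript
// Sketch: inline the critical CSS in a <style> tag, then load the remaining
// stylesheet without blocking rendering via the common rel="preload" swap
// trick, with a <noscript> fallback for browsers without JS.
function headMarkup(criticalCss, restCssHref) {
  return [
    `<style>${criticalCss}</style>`,
    `<link rel="preload" href="${restCssHref}" as="style" ` +
      `onload="this.onload=null;this.rel='stylesheet'">`,
    `<noscript><link rel="stylesheet" href="${restCssHref}"></noscript>`,
  ].join('\n');
}

console.log(headMarkup('body{margin:0}', '/css/rest.css'));
```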
Hashnode blogs are powered by a home-grown CDN which makes sure that content is delivered from the location geographically closest to the reader. Since we run our own CDN, we can cache content for a longer period of time and keep it in the cache even if it's not frequently accessed. So, the overall cache HIT ratio goes up and your blog enjoys a consistently low TTFB (time to first byte).
You can check out a report here which outlines the TTFB of Emil Moe's blog. You can see that the TTFB is < 100ms for most locations. We are working on spinning up more DCs across the globe and will eventually achieve < 100ms TTFB for all locations. In short, we turn your dynamic blog into a static site and cache it for up to a month, so you see a better cache HIT ratio.
Furthermore, all static assets like JS, CSS, and images have a max-age of 1 year and are cached by the browser. So, there are no repeat HTTP calls to download these resources.
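The caching policy described above can be sketched like this (the exact header values and path rules are assumptions for illustration, not Hashnode's actual config):

```javascript
// Sketch of the two-tier caching policy: rendered HTML is cached at the
// edge for up to a month, while fingerprinted static assets (JS/CSS/images)
// are cached by the browser for a year.
const MONTH = 30 * 24 * 60 * 60; // 2,592,000 seconds
const YEAR = 365 * 24 * 60 * 60; // 31,536,000 seconds

function cacheControlFor(path) {
  const isStaticAsset = /\.(js|css|png|jpe?g|webp|svg|woff2?)$/.test(path);
  return isStaticAsset
    ? `public, max-age=${YEAR}, immutable`
    : `public, s-maxage=${MONTH}, stale-while-revalidate=60`;
}

console.log(cacheControlFor('/static/main.abc123.js'));
// public, max-age=31536000, immutable
console.log(cacheControlFor('/my-first-post'));
// public, s-maxage=2592000, stale-while-revalidate=60
```

Fingerprinted filenames (a hash in the name) are what make a one-year, immutable max-age safe: when the content changes, the URL changes too.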
Also, it's worth noting that we are able to terminate SSL at the edge locations while serving cached content. Here is an article that explains how we generate SSL certificates for free. It doesn't cover how we do it at the edge locations, but you will get the idea.
Webfonts are necessary in some cases, but they also block the rendering of the page. If you defer the loading of webfonts, your text may appear unstyled for a brief moment and your visitors will see a FOUC (flash of unstyled content). So, what do we do? We don't load any webfonts at all and use system fonts instead, which works out pretty well. 😃
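For reference, a widely used cross-platform system font stack looks like this (the exact stack a given blog ships may differ):

```javascript
// A common system font stack: each OS picks its native UI font, so no
// webfont download is needed and there is nothing to flash unstyled.
const systemFontStack = [
  '-apple-system',      // macOS/iOS Safari
  'BlinkMacSystemFont', // macOS Chrome
  '"Segoe UI"',         // Windows
  'Roboto',             // Android
  '"Helvetica Neue"',
  'Arial',
  'sans-serif',         // last-resort fallback
].join(', ');

console.log(`body { font-family: ${systemFontStack}; }`);
```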
Resize and compress images
It's also worth noting that we serve images in the WebP format whenever possible.
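"Whenever possible" can be decided per request: browsers that support WebP advertise it in the Accept header. A minimal sketch (how the actual image pipeline decides is an assumption on my part):

```javascript
// Sketch: serve a WebP variant only if the request's Accept header says
// the browser can decode it; otherwise fall back to JPEG/PNG.
function preferWebp(acceptHeader) {
  return /image\/webp/.test(acceptHeader || '');
}

console.log(preferWebp('image/avif,image/webp,image/apng,*/*')); // true
console.log(preferWebp('image/png,image/*;q=0.8'));              // false
```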
HTTP2 & gzip everything
We leverage HTTP2 for extra performance boost and gzip all the assets.
When someone loads your blog's home page, we preload certain assets, e.g. a script file that is likely to be needed. But we don't stop there. We also listen for hover events to see if a reader is about to click on an article link. In that case, we prefetch the article even before the reader clicks on it. This increases the perceived speed of article pages. Here is a demo of this technique:
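In code, the idea can be sketched roughly like this (the event wiring and helper names are illustrative, not Hashnode's actual implementation):

```javascript
// Sketch of hover-based prefetching: on mouseover of an internal link,
// inject a <link rel="prefetch"> so the browser fetches the article early.
function attachHoverPrefetch(doc) {
  const seen = new Set(); // prefetch each URL at most once
  doc.addEventListener('mouseover', (event) => {
    const link = event.target.closest && event.target.closest('a[href^="/"]');
    if (!link || seen.has(link.href)) return;
    seen.add(link.href);

    const hint = doc.createElement('link');
    hint.rel = 'prefetch';
    hint.href = link.href;
    doc.head.appendChild(hint);
  });
}
```

By the time the reader finishes moving the mouse and clicking, the article is usually already in the cache, so navigation feels instant.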
In addition to the above points, there are many more small improvements that boost performance. We pass a total of 21 audits on Lighthouse. You can see the full list here.
We still have some work to do to reach a PageSpeed score of 100 on mobile devices. But I feel this is a good starting point.
If you are looking to create your own dev blog, there is no reason you shouldn't use hashnode.dev. Feel free to DM me if you need access. :)