Reducing Request Payloads


When making a request to our API, you may be surprised to learn that the majority of the response you receive from us isn't actually JSON data; it's headers.

In fact, an average request to our API results in a 1358-byte response, but most of the time our JSON makes up only 448 bytes of that, even when performing what we call a full lookup containing the most data our API offers. That's a ratio of just 33% JSON to 67% headers.

That's why we've gone through all the headers we send and removed the ones that don't make sense in our API responses. Headers that describe encoding, cache times and how you can access the API using a newer HTTP standard will remain, as these are important for compatibility and efficiency. But headers used for tracking and for debugging our CDN (Content Delivery Network) have been removed.
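If you'd like to see this split for yourself, here's a minimal sketch of how you might compare header bytes to body bytes for any HTTP response. The host, path and key below are placeholders rather than our real endpoint, and the figure it prints approximates HTTP/1.1 framing rather than the compressed headers used by HTTP/2 and HTTP/3.

```python
# Rough sketch: compare header bytes to body bytes for an HTTP response.
import http.client

HOST = "api.example.com"          # placeholder host, not our real endpoint
PATH = "/v2/lookup?key=YOUR_KEY"  # placeholder path and key

conn = http.client.HTTPSConnection(HOST)
conn.request("GET", PATH)
resp = conn.getresponse()
body = resp.read()
conn.close()

# Approximate the on-the-wire size of the status line plus headers.
status_line = f"HTTP/1.1 {resp.status} {resp.reason}\r\n"
header_block = "".join(f"{k}: {v}\r\n" for k, v in resp.getheaders()) + "\r\n"
header_bytes = len(status_line.encode()) + len(header_block.encode())
body_bytes = len(body)
total = header_bytes + body_bytes

print(f"headers: {header_bytes} bytes ({header_bytes / total:.0%})")
print(f"body:    {body_bytes} bytes ({body_bytes / total:.0%})")
```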

A lot of these headers don't actually originate with us but are generated automatically by our CDN partner. These extra headers make perfect sense to include with a normal page load, where the extra 910 bytes are minuscule compared to the almost 1MB size of most web pages today.

But for an API like ours, where the JSON is only 448 bytes, an extra 910 bytes of headers doesn't make sense. So as of today we've managed to reduce those headers from 910 bytes down to 359 bytes, bringing the total average payload size to 807 bytes. That's still sizeable, but it's much lower than the previous 1358 bytes, and when extrapolated over the billions of requests we handle it really adds up.
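To put that in perspective: 1358 - 807 = 551 bytes saved per response, which works out to roughly 551 GB of transfer avoided for every billion requests served.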

The end result, of course, is that you get answers from our API faster, as there is simply less data to transfer. This change went live on our v2 API today and there's nothing for you to activate or change; you should already be benefitting from it as you read this.

This change, alongside the activation of HTTP/3 and 0-RTT that we mentioned in our previous blog post, is part of a general efficiency drive we're undertaking, which has included the addition of a new North American server node, operating system upgrades, new webserver deployments and new server-side code interpreters.

One thing we've not mentioned in a blog post until now is that we also recently rewrote our server-side database caching system, which has resulted in a huge reduction in initialisation time. This is especially important because this piece of code is initialised on every single API request, so any improvement in startup time has dramatic effects when extrapolated across our entire request load.
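To illustrate why that matters, here's a rough back-of-the-envelope sketch, using made-up stand-in functions rather than our actual code or timings, of how even a microsecond-level saving in per-request initialisation compounds across a large request volume.

```python
# Back-of-the-envelope sketch: why per-request initialisation time matters.
# Both functions are made-up placeholders, not our real start-up code.
import timeit

def old_init():
    # stands in for a heavier start-up path run on every request
    return {"config": "parsed", "connections": list(range(100))}

def new_init():
    # stands in for a leaner rewrite of the same step
    return {"config": "parsed"}

runs = 100_000
per_call_old = timeit.timeit(old_init, number=runs) / runs
per_call_new = timeit.timeit(new_init, number=runs) / runs

requests = 1_000_000_000  # a billion requests
saved = (per_call_old - per_call_new) * requests

print(f"saving per request: {(per_call_old - per_call_new) * 1e6:.2f} µs")
print(f"saved over {requests:,} requests: {saved / 3600:.1f} hours of compute")
```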

We hope this post was interesting. We do enjoy these deeper dives into what we're doing and hope to make these kinds of posts on a more regular schedule.

Thanks for reading and have a wonderful weekend!

