Supercharging our API with PHP 8.1


Today we've upgraded our PHP version from 7.3 to 8.1 for our v2 API. This is an upgrade we've wanted to do for quite a while; in fact, we've been running tests against 8.0.x versions of PHP for the past 12 months. It has taken some effort to upgrade our code due to the many changes between PHP versions, specifically the deprecation of old functions and changes to the behaviour of existing functions.

In addition to those required code updates we had to do a lot of performance testing. Due to the low-latency nature of our API we're incredibly sensitive to interpreter changes that could introduce performance regressions. And whilst PHP versions 8.0 and 8.1 introduce many great ways to improve performance, including AVX instruction use, JIT compilation and improvements to OPcache, there can be performance regressions depending on the methods your code uses.

Often in coding there are a multitude of ways to accomplish the same goal; even a simple array-iterating function can be implemented in many different ways, each with wildly different performance characteristics.
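
As a trivial illustration (not our actual code), consider summing the squares of an array's values. Each of the following is a valid PHP implementation, yet each exercises the interpreter quite differently:

```php
<?php
$values = range(1, 1000);

// 1. A plain foreach loop: no per-element function-call overhead.
$sum = 0;
foreach ($values as $v) {
    $sum += $v * $v;
}

// 2. array_map + array_sum: concise, but builds an intermediate
//    array and invokes a callback for every element.
$sum = array_sum(array_map(fn ($v) => $v * $v, $values));

// 3. array_reduce: a single pass, but still pays the per-element
//    callback cost.
$sum = array_reduce($values, fn ($carry, $v) => $carry + $v * $v, 0);
```

Which of these wins can change between interpreter versions, which is exactly why we benchmark rather than assume.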

Thus we had to do a lot of performance testing. We've run billions of requests through PHP 8.x since it was released and through this testing we've identified and changed parts of our code where needed to get the best results. This work didn't just happen in the last month but has been an ongoing effort since November 2020 when PHP v8.0 was released.

And so today is the day that our v2 API is finally upgraded to an 8.x PHP branch, specifically v8.1.0, the latest and greatest version of PHP.

With this change we're seeing a steady 25% latency improvement over our previous code, which directly translates into being able to handle more requests per second. But remember, this isn't just down to us switching PHP versions; the improvement also includes all the work we did to bring our code up to the PHP 8 implementation standard. Many of our code changes by themselves have improved performance simply by using newer or faster functions and methods.
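
As one example of the kind of modernisation we mean (an illustrative snippet, not a literal excerpt from our codebase), PHP 8.0 introduced str_contains(), which is both clearer and safer than the old strpos() idiom:

```php
<?php
$connectionType = 'VPN (Virtual Private Network)';

// The PHP 7 idiom: strpos() returns an offset or false, inviting the
// classic loose-comparison bug when the needle matches at position 0.
$isVpn = strpos($connectionType, 'VPN') !== false;

// The PHP 8.0+ way: str_contains() returns a plain boolean.
$isVpn = str_contains($connectionType, 'VPN');
```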

One of the pitfalls when doing an upgrade like this is code debt. We now support four different versions of our v2 API and the oldest of these needed more work than our newest to even execute consistently under PHP v8.x. We also had to recompile some of our own libraries that we include within our PHP environment to bring them up to 8.x compatibility.

You may have read in a previous blog post how we rewrote our caching library. The main reason for that was to support builds of PHP 8.x, although we were also able to improve performance along the way simply through the natural iterative design process.

At present only our v2 API is using the newest PHP 8.1 interpreter, but we have done testing with the customer dashboard and other parts of our site, including administrative backends, and it looks promising for a full site rollout over the next few months.

So that's today's update. Thank you for reading and we hope everyone is having a great weekend.


Reducing Request Payloads


When making a request to our API you may be surprised to learn that the majority of the response you receive from us isn't actually JSON data, it's headers.

In fact an average request to our API results in a 1358 byte response, but most of the time our JSON makes up only 448 bytes of that when performing what we call a full lookup containing the most data our API offers. That's a ratio of just 33% JSON to 67% headers.

That's why we've gone through all the headers we send and removed the ones that don't make sense in our API responses. Headers that describe encoding, cache times and how you can access the API using a newer HTTP standard will remain, as these are important for compatibility and efficiency. But headers used for tracking and for debugging our CDN (Content Delivery Network) have been removed.
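
Exactly where this trimming happens varies across a stack (much of it is CDN configuration), but as a minimal PHP-level sketch, with illustrative header names rather than the exact ones involved:

```php
<?php
// Strip headers that only serve CDN tracking or debugging.
// These names are hypothetical examples.
foreach (['X-Debug-Trace', 'X-Request-Id', 'Server'] as $name) {
    header_remove($name);
}

// Headers important for compatibility and efficiency stay, e.g.:
header('Content-Encoding: gzip');        // how the body is encoded
header('Cache-Control: max-age=60');     // how long it may be cached
header('Alt-Svc: h3=":443"; ma=86400');  // advertises a newer HTTP standard
```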

A lot of these headers don't actually originate with us but are generated automatically by our CDN partner. These extra headers make perfect sense to include with a normal page load, where the extra 910 bytes are minuscule compared to the almost 1MB size of most web pages today.

But for an API like ours, where the JSON is only 448 bytes, an extra 910 bytes of headers doesn't make sense. So today we've reduced those headers from 910 bytes to 359 bytes, for a total average payload size of 807 bytes. That's still sizeable, but it's much lower than the previous 1358 bytes, and when extrapolated over the billions of requests we handle it really adds up.

The end result of course is you get answers from our API faster as there is literally less data to be transferred. This change has been enabled on our v2 API today and there's nothing for you to activate or change; you should already be benefitting from it as you read this.

This change, alongside our activation of HTTP/3 and 0-RTT that we mentioned in our previous blog post, is part of a general efficiency drive which has included the addition of a new North American server node, operating system upgrades, new webserver deployments and new server-side code interpreters.

One thing we've not mentioned in a blog post until now is that we also recently rewrote our server-side database caching system, which has resulted in a huge reduction in initialisation time. This is especially important because this piece of code gets initialised on every single API request, so any improvement in startup time has dramatic effects when extrapolated across our entire request load.
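
As a rough sketch of the sort of technique that helps here (illustrative only, not our actual implementation): defer expensive setup until the cache is actually touched, and reuse persistent connections so the cost isn't repaid on every request.

```php
<?php
final class DbCache
{
    private static ?\PDO $db = null;

    public static function connection(): \PDO
    {
        // Connect lazily: requests that never touch the cache
        // pay no initialisation cost at all.
        if (self::$db === null) {
            self::$db = new \PDO(
                'mysql:host=127.0.0.1;dbname=cache', // placeholder DSN
                'cache_user',                        // placeholder credentials
                'secret',
                [\PDO::ATTR_PERSISTENT => true]      // reuse the connection
            );
        }
        return self::$db;
    }
}
```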

We hope this post was interesting; we do enjoy these deeper dives into what we're doing and hope to make these kinds of posts on a more regular schedule.

Thanks for reading and have a wonderful weekend!


Introducing support for HTTP/3 and 0-RTT


Today we've added support for HTTP/3 and 0-RTT across our entire website, including all versions of our API. Briefly, these technologies let us dramatically decrease the latency of establishing new secure connections and of resuming prior ones.

HTTP/3 (with QUIC) Explained

HTTP/3 is built on the QUIC protocol, which uses UDP instead of TCP and, through its ingenious implementation, substantially reduces the back-and-forth chatter between your client and the server you're requesting content from.

Traditionally a secure HTTP request begins with the client sending a TCP SYN to a server, which answers with a TCP SYN+ACK; the client then sends a final ACK, negotiates TLS, and only then exchanges its HTTP request for an HTTP response. Each of those stages adds a round-trip before any content flows.

HTTP/3 does away with most of this. Because QUIC folds transport setup and TLS encryption into a single exchange, the client sends its QUIC handshake and HTTP request together and receives the QUIC response and HTTP response together. This removes an entire round-trip compared with the traditional handshake.

0-RTT Explained

While HTTP/3 with QUIC enables very fast initial connection establishment by removing an entire round-trip from the handshake process, 0-RTT takes this a step further with connection resumption.

This is where the secrets negotiated during a prior HTTP/3 handshake can be reused without renegotiation, which means your client can send a new request for API data in its very first round-trip to our servers.

Performance Implications

The end result is a much faster TTFB (Time to First Byte) and a fast conclusion to the entire API request followed by much lower latency for subsequent requests.

Latency is very important to us because it can be the determining factor in whether our API is deployed in a specific scenario or not. If our API is too slow for where it's needed we could lose a potential customer because of it.

And while we launch these new features today, we're currently in the midst of an infrastructure upgrade that will bring further improvements. This morning we deployed a new web server on one of our nodes (NYX) which offers us both new capabilities and opportunities for performance gains across our site and API. It also affords us tighter integration with our CDN (Content Delivery Network) by better matching their capabilities. To put it simply, there is more yet to come.

If you've been using HTTP/1.1 or HTTP/2 to access our API before now, there's no need to worry: you still can. For those who already have or are willing to upgrade their clients to support HTTP/3 with QUIC, we welcome you to do so; the latency benefits are too good to ignore, especially if you're performing millions of requests to us on a regular basis.
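
For example, if your client uses PHP's curl extension and it's built against a libcurl with HTTP/3 (QUIC) support, a request might look like this sketch (the endpoint URL is a placeholder):

```php
<?php
// libcurl defines CURL_HTTP_VERSION_3 as 30; older PHP builds may not
// expose the constant even when the underlying libcurl supports it.
if (!defined('CURL_HTTP_VERSION_3')) {
    define('CURL_HTTP_VERSION_3', 30);
}

$ch = curl_init('https://api.example.com/v2/8.8.8.8'); // placeholder URL
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_3);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
if ($response === false) {
    // HTTP/1.1 and HTTP/2 both still work as fallbacks.
    echo 'HTTP/3 request failed: ' . curl_error($ch);
}
curl_close($ch);
```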

That's all the updates we have for you today, thanks for reading and have a great week!


New North American Server Node Introduced


As the service has continued to grow in North America we've continued to invest in new infrastructure to serve those users. Adding new servers to a region doesn't just enable more customers and increased performance, it also provides redundancy against server failures and network malfunctions.

For our new LETO node we've chosen a new datacenter in the United States, and we're running this node on what is now the most powerful server in our cluster, utilising AMD's 2nd-generation EPYC (Rome) architecture. Previously we've placed one other node in the US and two in Canada, run by two different companies; this LETO node is hosted by a third.

It's important to diversify infrastructure like we are doing: utilising not just different datacenters but placing them in different regions and having them run by different companies, as any single entity can suffer unexpected downtime. It was only a few months ago that the biggest host in Europe had a total infrastructure failure lasting an entire hour, something we were insulated from due to our use of many different datacenter partners.

As we mentioned above, LETO is now the most powerful node in our cluster. In node terms it's the equivalent of two of our previous North American nodes put together, and we intend to make this our new performance baseline when we upgrade our previous servers or add new ones.

We're still planning a future expansion into Asia with 3 server nodes, but due to the chip shortage and the resultant high server prices we've as yet been unable to execute on that. We continue to look for good options and will take this step when the time is right.

Thanks for reading and have a great weekend.


v1 API changes


Today we've made a change to our v1 API (which we sunset in March 2020 and have not supported since) that essentially replaces it with a translation layer over our current v2 API. This means users still making requests to our v1 API will now actually be querying our v2 API and having the result reshaped into the v1 format.
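
Conceptually the translator works like the sketch below. The endpoint and every field name are hypothetical placeholders; they illustrate the approach, not our actual schema.

```php
<?php
// Answer a v1-style request by querying v2 internally and reshaping
// the richer v2 document into the flatter v1 layout.
function handleV1Request(string $ip): string
{
    $raw = file_get_contents("https://api.example.com/v2/{$ip}"); // placeholder
    $v2  = json_decode($raw, true);

    // Keep only what v1 ever exposed, discarding newer fields.
    $v1 = [
        'ip'    => $ip,
        'proxy' => ($v2[$ip]['proxy'] ?? 'no') === 'yes' ? 1 : 0, // hypothetical fields
    ];

    return json_encode($v1);
}
```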

This does have a performance penalty of a few milliseconds, which is why we've waited until now, when 99.9% of all our API requests are already made natively to our v2 API. Until now we've maintained the v1 API in a minimal fashion: it was functional but lacked many of our new features, and even some of our newer data wasn't available.

With this change, the handful of paid customers and few hundred free customers who have been using the v1 API for years can continue to do so, and it reduces our technical debt in that we no longer need to update the v1 API when we make database schema changes, for instance.

If you're still using the v1 API we would highly recommend you update your implementation to target our v2 API, as not only will it be slightly faster in answering your queries, but you gain access to a lot more data and features, such as safe connection types, Custom Rules and our CORS feature.

Thanks for reading and have a great week!


Website Interface Refresh


Today we've refreshed the website with a renewed user interface. This new design brings what we call our glass theme found on our homepage to all the pages of our site for both our slate and snow themes.


Alongside giving our pages a more modern appearance it serves to bring design harmony and consistency. We've loved the topological map design featured on our homepage and we're very happy to now have that available on all our pages.

With this change we've also worked on the site design from a functionality standpoint. We've taken extra time to tweak the way our content is displayed across the entire site for smaller-screened devices.

Prior to today, up to 25% of your screen may have been wasted on empty margins if you were using a phone, tablet or other small display. Now the content can use up to 98% of your display, with only the very smallest of margins, making our pages more usable on smaller displays.

Thanks for reading and we hope you love the new design as much as we do.


Custom Rule Enhancements


Today we've enhanced the custom rules feature found within the customer dashboard to support two new conditions and to display the last modified time on your rules, helping you keep track of why a rule was made.

[Screenshot: a custom rule showing its new last modified time]

The new last modified time looks like the screenshot above; we were able to fit it in without altering the height of the rules, which is important when you have many and need to scroll through them quickly.

When a rule is saved or toggled, whether individually or using the global controls, its last modified time will update on the page in real-time.

In addition to this we've added two new condition types: Greater than or equal to and Lesser than or equal to. Both join our previous Greater than and Lesser than conditions but offer slightly expanded functionality, letting one condition cover two scenarios without you needing to test how our interpretation of Greater or Lesser is implemented, as sketched below.
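
Conceptually the new operators behave like this hypothetical sketch of a numeric condition check (an illustration, not our actual rule engine):

```php
<?php
function conditionMatches(string $operator, float $fieldValue, float $ruleValue): bool
{
    return match ($operator) {
        '>'  => $fieldValue > $ruleValue,
        '<'  => $fieldValue < $ruleValue,
        '>=' => $fieldValue >= $ruleValue, // new: one condition covers both
        '<=' => $fieldValue <= $ruleValue, // scenarios, boundary included
        default => false,
    };
}
```

The boundary value is matched unambiguously, so you no longer need to probe whether a plain Greater than or Lesser than includes equality.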

We've also made some efficiency changes to how custom rules are applied within our API, resulting in lower response times to your queries, especially when your rules use many conditions that compare numbers.

Support for these new conditions has been applied to all v2 API versions, from June 2020 all the way to the current August 2021 dated version, so you can use the new condition types immediately and without changing your selected API version.

Thanks for reading and we hope everyone is having a great week!


Automatic Account Deletion


For as long as there has been data to keep, there has been data kept for far longer than necessary. Nowhere is this more true than with technology companies. In fact, tech companies today hold so much information about people that they may know you better than you know yourself.

And while we're not a large tech company holding lots of private information, we do hold some information generated by our customers: things like your usage of our service and where you're deploying it, the email address you signed up with and some limited payment information if you purchased a subscription.

These are things we only really need while you actually use our service, which is why we recently introduced a self-deletion feature that enables you to both close your account and erase all the associated data we hold about you.

Today we're taking this a step further by automatically removing unused accounts. If, 30 days after the creation of your account, you've not used it at all (meaning you never logged into the dashboard or made a single query to the API using your API Key), we will schedule your account for deletion and notify you via email.

Once scheduled, you'll have 15 days to cancel the deletion. But since these deletions are only for completely unused accounts, there shouldn't be much need to reverse one; you can always sign up again when you do need the service.

In addition to this, we will be removing inactive accounts 1 year after their last moment of activity. If you've logged into the dashboard or made even a single API request, we consider that activity, which pushes back any scheduled deletion by a year. As with unused account deletions, when a deletion is scheduled you'll be notified via email and have 15 days to cancel it.
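
In rough terms, the lifecycle described above looks something like this sketch (the function name and account fields are illustrative only, not our actual system):

```php
<?php
function shouldScheduleDeletion(array $account, \DateTimeImmutable $now): bool
{
    $created = new \DateTimeImmutable($account['created_at']); // hypothetical field

    // "Activity" means a dashboard login or any API request.
    if ($account['last_activity_at'] === null) {
        // Completely unused: eligible 30 days after creation.
        return $now >= $created->modify('+30 days');
    }

    // Otherwise eligible 1 year after the last moment of activity.
    $lastActive = new \DateTimeImmutable($account['last_activity_at']);
    return $now >= $lastActive->modify('+1 year');
}
// In both cases the owner is emailed and has 15 days to cancel.
```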

We're making these changes because we think user privacy and control of your data have been headed in the wrong direction across the tech industry, and we want to do our part to nudge the line back in the right direction. It's your data and you should always be in control of it; when it's clear you haven't needed our service in a while, we should turn back time and make it as if you'd never used it.

Thanks for reading and have a great week.


Detection of iCloud Private Relay


With the release of iOS 15, Apple has enabled a new feature for paid subscribers of their iCloud storage product called iCloud Private Relay. It's essentially a VPN service for the Safari browser on iOS 15 and macOS Monterey.

We've been monitoring the service throughout the iOS 15 beta and we've determined that Apple is using three content delivery networks as partners for this feature, all of which we've been able to detect without issue. Today we've enabled this detection in our API, which means that by default users of iCloud Private Relay will now be detected as VPNs when visiting your sites and services.
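
If you check visitors against our API, nothing about your integration needs to change; something like the following sketch (placeholder endpoint and response fields, not our documented schema) will now flag Private Relay users too:

```php
<?php
$ip  = '203.0.113.7'; // example visitor address
$raw = file_get_contents("https://api.example.com/v2/{$ip}?vpn=1"); // placeholder
$result = json_decode($raw, true);

// Hypothetical response shape: a per-IP object with a proxy flag.
if (($result[$ip]['proxy'] ?? 'no') === 'yes') {
    // As of today this includes iCloud Private Relay egress addresses,
    // unless you enable the allow rule described below.
    http_response_code(403);
    exit('VPN connections are not permitted.');
}
```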

We know that since this is baked into iOS 15 and macOS Monterey it may become a popular service amongst your users, and so you may not want to block users from accessing your sites and services when they use iCloud Private Relay.

Due to this we've added a new custom rule to the dashboard's rule library (Big Business -> Allow iCloud Private Relay) which, when enabled, will allow these users to bypass being detected as VPNs while keeping other VPN services blocked. Since iCloud Private Relay is a paid service and most Apple users do not pay for iCloud, you may not deem it necessary to whitelist the service, but we've made the rule available just in case.

Thanks for reading and have a great week.


Expanded Account Controls


Today we're introducing a new feature to the customer dashboard which enables you to close your account and erase all the information we hold surrounding your usage of our service.

The reason we've done this is that we strongly believe your data belongs to you, and just because you've provided us access to some of it doesn't mean you shouldn't be able to revoke that access on your terms.

For many years you've been able to ask our support team to have your account erased, but we feel closing an account should always be as easy as opening one, and that includes the removal of all your data.

[Screenshot: the new account closure button in the settings tab]

So within the dashboard you'll now see a new button in the top right corner of the settings tab, as shown above, which when pressed will begin the process of closing your account and erasing your data.

From the moment you click that button you'll have 30 minutes to either export your data, if you haven't already done so, or cancel the scheduled closure, as shown in the screenshot below.

[Screenshot: the scheduled closure countdown with options to export data or cancel]

To be clear, this is a full deletion that actually erases your data held on our servers, not just an account disablement. This means you can sign up again using the same email address you used previously, because we will have no knowledge of it in our system once the deletion occurs.

It is our hope that we're exceeding the standard set in our industry not just for data portability but for account control and data ownership. We truly believe in providing a great, frictionless service that doesn't just meet the letter of laws and regulations but the spirit of them too.

We hope everyone is having a great week and thank you for reading!

