New European Server Node Introduced


Today we've introduced a new node to our cluster called THEA, which increases our European capacity by 17% in request terms but by around 30% in raw processing capacity for that region.

THEA runs the same hardware configuration as our LETO node, which we introduced in November last year for North America. That means it features one of the latest AMD EPYC processors, with a very high thread count and high Instructions Per Clock (IPC).

We've done this for three main reasons.

Firstly, request load has been steadily increasing over the past few months, which means we needed to expand our footprint to support new customers. We did that for North America late last year with LETO, and now we're doing it for Europe with THEA.

Secondly, we had already been planning to migrate our server nodes to higher specification servers. This new baseline includes the latest generation of high core count processors from AMD and PCIe-based NVMe flash storage. Both LETO and THEA meet these new criteria.

Thirdly, as you may have read about yesterday, the attacks against us are increasing in frequency and severity. This reality makes it important for us to have extra capacity beyond what is merely required to run our service.

And while we do rely heavily on our CDN partner CloudFlare to scrub attack traffic for us, having our own infrastructure able to withstand some of the attack traffic we receive still plays an important role. They can't always react fast enough or immediately decipher which of our traffic is legitimate and which is malicious, because our service is an API accessed by headless servers for the vast majority of requests.

So that is our announcement for today. We will be announcing a new feature early next month, so make sure to check back for that. Until then, thanks for reading and have a wonderful week.


Major service disruption


Today, between 12:25 PM and 1:15 PM GMT, we suffered a major outage. At its peak, just over half of all traffic sent to our servers received no response at all.

This was due to a very large attack on our infrastructure that didn't trigger our anti-DDoS protection, because it originated from a very large number of source addresses and generated traffic resembling that of our legitimate customers. In addition, one of our server nodes was offline before the attack began due to an unrelated fault, which removed 25% of our North American cluster capacity.

The outage ended when we mitigated the attack manually by engaging certain controls at our CDN partner, which immediately brought service back into normal operation.

With attacks against the service becoming more frequent, we will be spending even more time on our mitigation strategy. Today we were slow to react because our automatic system didn't engage, and when trying to deal with the attack manually we found it difficult to pinpoint which addresses were launching it amongst our normal traffic.

Although we saw that our traffic was several times greater than normal, we couldn't quickly identify which addresses were part of the attack and which were legitimate customer traffic, because the attack purposefully mimicked legitimate requests to our service.

If we have anything more to share about this attack we will update this post. Until then, we are very sorry this occurred and we will strive to do better.


Introducing Managed Rules


Almost one year ago, on January 20th 2021, we introduced a new feature called the custom rule library. This was a new menu within the customer dashboard containing pre-made rule templates that you could import into your account and then tailor to your specific needs.

This has been a very well received feature that led to a huge increase in the usage of custom rules by our customers throughout last year.

But there has been something lacking. What happens when you import a rule from the library that needs to change over time? For example, let's say you create a rule that applies to a specific company's network; over time their network will expand, and the rule may no longer encompass their entire infrastructure.

That is where managed rules come in. We still have the same rule templates as before, which we're now calling self-managed rules, but in addition we've added what we call proxycheck.io managed rules.

[Screenshot: the rule library with new buttons for displaying self-managed and proxycheck.io managed rules]

As the above screenshot shows, the rule library now has new buttons at the top for displaying self-managed or proxycheck.io managed rules. We want to be clear that we're still committed to self-managed rules; in fact, every managed rule we create going forward will have a self-managed version available for easy templating.

To explain how it works: when you add a managed rule to your account, we control the conditions that trigger that rule for you. Whenever we update that rule in the library, your saved version will also be updated automatically. You still get to name the rule and alter its outputs, as we only manage the rule's conditions. Below is a screenshot showing how an expanded managed rule looks in the dashboard.

[Screenshot: an expanded managed rule in the dashboard]

We've made some of the global control buttons for managed rules blue so they're easier to notice within the dashboard interface, while self-managed rules will continue to use pink global control buttons.

If we ever remove a managed rule from the library that you have saved, your saved rule will automatically transition to a self-managed rule that you can fully modify. At present we haven't needed to remove any rule from the library, but it may happen in the future; for instance, if a company ceases to exist, a rule targeting that company may no longer be needed.

Like all our rules, you can import and export these easily. You can even export a managed rule, edit it in your favourite text editor and import it back into your account as a managed or self-managed rule.

Managed rules are available immediately for all accounts using our v2 API versions dated 2021 or newer.

Thanks for reading and happy new year!


Our 2021 Retrospective


At the end of each year we like to look back and discuss some of the significant things that happened with our service including milestones, new features and improvements.

So before we get to this year, let's start by evaluating a major feature we introduced at the tail end of last year called burst tokens. Since we introduced this feature, many customers have told us it gave them the confidence to make a purchase by quelling their usage anxiety. Before this feature, customers had to guess their future usage, and many found this difficult.


The cushion provided by burst tokens has helped to alleviate this decision burden and as a result we've seen an increase in free customers converting to paid plans.

In late December last year we also introduced a major update to our Cross-Origin Resource Sharing (CORS) feature, enabling the use of wildcard sub-domains and adding an API endpoint to alter your domains in an automated way.


These changes helped expand CORS usage by 191% compared to 2020, and many customers told us that wildcard support was the main reason they began using the feature, thanks to the speed with which they could now deploy it across their entire domain with a single entry.

Last year saw us finally launch North American nodes, starting in December 2020, and we added two more this year in January 2021 and November 2021. We've seen our US based traffic steadily increase throughout the year, hence the introduction of several new servers.

Traffic in general doubled this year over 2020, with the majority of it originating in North America. Our infrastructure has held up great, and we are in the process of swapping out older nodes for newer hardware. The newest server we introduced in November 2021 (LETO) is now our most powerful node and has become the new baseline for what we procure going forward.

In addition to growing our physical infrastructure we also made many investments in our virtual infrastructure. We've been able to handle the influx of customers and their traffic without incident while maintaining fast database coherency within our cluster. We use a custom database and cluster architecture in part to maximise our hardware and offer the most affordable pricing by not relying on expensive commercial solutions.

At the beginning of the year we introduced a new change log interface featuring color-coded categories for the different kinds of changes we make.


This has been a joy for us to use as we take great satisfaction in detailing the work we do to make our service better for you.

One big change we made to the Dashboard this year was the automatic refreshing of the positive detection log. Our analytics showed that many customers like to leave the dashboard open for long periods, so we looked at ways to improve this through live updating. We followed this up recently with a realtime QPS (Queries Per Second) display, which has been well received.

A huge feature we introduced in 2019 was Custom Rules, the feature that enables customers to fully tailor how our API responds to their queries. At the very beginning of this year we built upon it with a Custom Rule Library, which currently contains 26 pre-made rules that you can import to your account and then edit. Since introducing this library we've seen rule use by accounts increase by 267% compared to 2020.


We found rules to be so empowering for our customers that we wanted to enhance them further, so we added the ability to import and export individual rules and increased the number of rules customers can have enabled at any one moment. We also made various UI improvements so rules are easier to create and manage, and introduced new condition types and API provided value options.

This year we expanded the control you have over your account by enabling you to delete your account using a button within the Dashboard. Prior to this you needed to contact support to have your account and all associated data removed from our service which we felt was unduly burdensome.


We believe it should always be as easy to leave a service and take your data with you as it was to sign up in the first place. In addition to these manual account controls, we also introduced automatic account deletion for when it's clear an account is no longer used and there's no good reason for us to keep your data any longer.

When it comes to the website, we mostly do pruning and tweaking. But in August this year we made a dramatic change with the introduction of a dark mode. You may even be using it right now to read this very post! This feature took a lot of time to get right, but we're very happy with it.

In addition to the dark mode, we followed up with an overall design overhaul we call Glass, which introduced our topological map background to all our webpages and changed our Raleway font to normalise number heights.

When it comes to the API and general technical advances, we transitioned our v1 API to become a v2 API proxy, introduced support for HTTP/3 with QUIC, updated our API backend from PHP 7.x to 8.1.x and significantly reduced our payload sizes through header pruning.

In addition to all of those changes, we introduced two new dated versions of the API that provide more data in our API results, such as organisation names and operator data. And speaking of operator data...

[Screenshot: a VPN operator data card]

Operator data, which includes detailed data cards like the one shown above, was one of the major features we introduced this year. We now have more than 50 VPN providers profiled in this manner, with more being added weekly. This has been a huge boost for our customers who rely on accurate VPN data; being able to say specifically which VPN service operates an IP is extremely conclusive and thus drives decision making confidence.

And so that has been our 2021 highlight reel, filled with lots of growth, new features and improvements. We did leave out some things for brevity, like our PHP client library gaining CORS and multi-IP checking support, specific UI enhancements across the site, small-screen device usability improvements and UTF-8 support on the API, but the things we felt were most important got a blog post and a full mention above.

Traditionally we have made any pricing changes to the service in January. We didn't do that in 2021, but we may make some pricing changes in 2022. However, any change that increases pricing won't apply to a plan you're already subscribed to: the price you began your plan at is the price you pay until you change plans.

In closing we hope everyone had a great year like we did. Thanks for reading and happy holidays!


Keeping your account safe


In today's post we want to share some tips that will keep your proxycheck account secure. We're doing this because we're seeing an uptick both in accounts being taken over by malicious actors and in the volume of accounts we're having to disable for breaking our terms of service.

So without further preamble, let's get into it!

Keeping your API Key secret

This is the first line of defence against your account being breached. We issue a 24-character API key to every account, where each character has 36 possibilities, which results in around 22 undecillion key permutations. This makes our keys practically impossible to brute force.
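
For the curious, the maths is easy to check yourself. Here's a quick sketch using PHP's bcmath extension (an assumption on our part; any arbitrary-precision tool will do):

```php
<?php
// 24 characters, each with 36 possibilities, gives 36^24 possible keys.
echo bcpow('36', '24');
// Prints a 38-digit number, roughly 2.245 x 10^37,
// which is about 22 undecillion on the short scale.
```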

But this built-in security by way of key length and complexity means nothing if your key is not kept secret. The number one way accounts get compromised is through the key being leaked, usually in source code through publicly accessible code repositories, or through key misuse, for instance trying to use your private key in public facing code.

If you work on an open source project that integrates our service, always make sure the key is included from a file that isn't part of your main project, or loaded from a database, so it won't be inadvertently shared within your code repo.

And remember, when making queries to our API you can use TLS encryption. Since all server-side requests must include your key, this is the best way to secure it against MITM attacks.
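
As a minimal sketch of both points, here's one way to keep the key out of your repository and always query over TLS from PHP. The environment variable name is an illustrative choice of ours, not a convention we mandate:

```php
<?php
// Load the key from the environment rather than hard-coding it,
// e.g. set via your server config or a gitignored file.
$apiKey = getenv('PROXYCHECK_API_KEY');
if ($apiKey === false) {
    exit('API key not configured' . PHP_EOL);
}

// Use https:// so the key is encrypted in transit.
$ch = curl_init("https://proxycheck.io/v2/8.8.8.8?key={$apiKey}");
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_SSL_VERIFYPEER => true, // reject invalid certificates
]);
$result = json_decode(curl_exec($ch), true);
curl_close($ch);
```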

Secure your account with a password

When you sign up you'll be emailed a link to log in to your account, and one of the first things you should do is create an account password. This will still allow you to log in using your API key, but it will require the password in addition to the key. Setting up a password also enables logging in with your email address.

And of course, don't reuse a password you use somewhere else, because that will open you up to credential reuse attacks should we or another site you log into be compromised. We strongly recommend using a password manager, which can generate randomised passwords for each website you sign up for.

Enable two-factor authentication

In addition to setting a password, you can enable two-factor authentication, which is essentially an extra password you enter when logging in; it changes every 30 seconds, making it difficult for an attacker to obtain.

You can use web based two-factor authenticators to generate these passwords, but we strongly recommend using a separate physical device like a smartphone to generate them. Many password managers also include two-factor capability, and we fully support the industry standard method for these, called TOTP (Time-based One-Time Password).
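
For the curious, there's no magic to these rotating codes. Below is a minimal sketch of the standard TOTP derivation from RFC 6238; real authenticator apps additionally base32-decode the enrolment secret, which we've omitted here for brevity:

```php
<?php
// TOTP: HMAC-SHA1 over the current 30-second time step,
// dynamically truncated to a 6-digit code (RFC 6238 / RFC 4226).
function totp(string $secret, int $digits = 6, int $step = 30): string
{
    $counter = pack('J', intdiv(time(), $step));           // 64-bit big-endian counter
    $hash    = hash_hmac('sha1', $counter, $secret, true); // 20 raw bytes
    $offset  = ord($hash[19]) & 0x0F;                      // dynamic truncation offset
    $number  = unpack('N', substr($hash, $offset, 4))[1] & 0x7FFFFFFF;
    return str_pad((string)($number % (10 ** $digits)), $digits, '0', STR_PAD_LEFT);
}

// Your device and the server derive the same code from the shared secret.
echo totp('shared-secret-from-enrolment');
```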

We have chosen not to offer SMS based two-factor support because it's not secure enough. This method is vulnerable to social engineering of phone network staff, who may issue an attacker a SIM card with your number on it, allowing them to intercept your two-factor codes.

To encourage the setting of a password and the enabling of two-factor authentication, we offer customers two extra custom rules in addition to their plan's provided rules.

Pay attention to email alerts

Many actions within the Dashboard cause email alerts to be sent, and you should pay attention to these; they may give you an early warning that someone other than yourself has gained access to your account.

We send alerts for the following reasons:

  • You've logged into the dashboard from a new IP Address
  • You've set or changed your account password
  • You've enabled or disabled two-factor authentication
  • You've changed your email address
  • You've signed up for or cancelled a paid plan
  • You've generated a new API Key

And of course make sure our emails are reaching your inbox and not being caught in your spam filter.

Keep your email address up to date

If we need to email you for any reason and we're unable to do so, your dashboard will show a notification at the top warning you of this. It's very important that you then update your email address, because we may disable your account if we're unable to contact you.

While it's rare that we disable accounts for this reason, there have been occasions where we had to disable an account to get a customer's attention about an important issue with their usage of our service.

Don't use temporary email services

As we stated above, it's very important that we're able to contact you about specific account related issues. Because temporary email services are, as their name implies, temporary, we can't contact you after you sign up for our service. It is for this reason we have an item in our terms of service regarding the use of temporary email addresses.

If you're found to be using a temporary email, even long after you initially signed up, the account will be disabled. You will then have one year to contact our support to have the account re-enabled so that you can change the email address. If you don't contact us within that year, the account and all associated data are automatically erased.

Something else to note about temporary email services: many of them don't have any kind of account system, and the inboxes of their temporary addresses are accessible in a public feed. This can put your account at risk of takeover.

Don't create more than one free account

This is the number one reason customers lose access to their accounts, and in fact, just this year alone we've had to disable thousands of accounts. We offer a very generous free tier where every feature is available in full; only the quantity of queries, custom rules and burst tokens you receive is dictated by whether you pay and by how much.

But still we face free account abuse. Some users even create hundreds to thousands of free accounts to avoid paying for service, and this is something we cannot and do not tolerate. If you're found to have more than one free account, you're risking all of the accounts you have: they could all be disabled at a moment's notice and at any time in the future.

Our stipulations here are very simple: you can have as many paid accounts as you want, but the moment you create multiple free accounts you're in breach of our terms of service.

And while we have allowed some customers with multiple free accounts to keep them, this is usually because their cumulative queries across all their free accounts remain below 1,000 per day, or because they contacted us first to ask permission and provided a reasonable circumstance which we accepted.

But in general we do not allow multiple free accounts and you should always follow our terms of service.

Don't commit financial fraud

Although this is rare, we do sometimes face financial fraud, where someone purchases service using stolen payment information, or uses legitimate payment information and later issues a chargeback through their bank.

Both of these issues cost us, and all merchants, a lot of money. Many people aren't aware of how financial crime impacts the cost of the goods and services they buy, but it does: we have to factor in the cost of payment insurance and the accumulating losses from chargeback fees and employee time spent collecting and submitting evidence for financial crime investigations.

We take a very hard stance on this: if we suspect you're using fraudulent payment information, or you issue a chargeback with your bank, we will refund the subscription and disable your account. There are no exceptions to this.


So that's our full guide to keeping your account safe and secure. If you ever need help with your account, please don't hesitate to contact our support. Even if your account is disabled, you have a full year to contact us to recover your data or rectify the circumstance that led to your account being disabled. Stay safe out there and have a wonderful week.


Realtime QPS Display


Today we've added a new feature to the dashboard which displays your queries per second in real-time. This is something we've wanted to add for a while, but it wasn't until we recently rewrote our per-node caching system that it became possible to deliver.

And that's because, with the billions of requests we handle, merely sampling incoming queries can itself affect your maximum requests per second. A high performance read-through cache that can provide valid snapshots of rapidly changing data, without slowing or denying changes to that same data, was therefore paramount to making this feature happen.
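
Our caching system is custom, but as an illustrative sketch of the general idea (using APCu purely for demonstration, not our actual implementation), queries can be counted in per-second buckets that a reader samples without ever blocking the writers:

```php
<?php
// Illustrative only: per-second counter buckets.
function record_query(): void
{
    $bucket = 'qps_' . time();
    if (!apcu_add($bucket, 1, 5)) { // create the bucket with a 5s TTL...
        apcu_inc($bucket);          // ...or increment it if it already exists
    }
}

function snapshot_qps(): int
{
    // Read the last *completed* second, so the snapshot is stable
    // even while the current second is still being written to.
    return (int) apcu_fetch('qps_' . (time() - 1));
}
```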

[Animated GIF: the realtime QPS graph in the stats tab of the dashboard]

Above is a little gif we made showcasing the new QPS graph, found at the top right of the stats tab within the dashboard and available to all customers with an account.

We think we've created a beautiful and unobtrusive live display of your queries, but if you do find it distracting you can click on the display to pause it. Like our other play/pause mechanics, your choice will be saved to your browser and maintained across visits.

So that's what we have for you today. We're working on a lot of backend changes at the moment, but as you can see, many of them result in frontend improvements. This real-time display of your queries would not have been possible without the work we did on our caching system, which itself was initiated by our move to PHP 8.x and the rewrites that required.

We hope you really like the new display and as always thanks for reading and have a great week.


New API version, curated VPN operator data and more!


Today we're introducing a new category of data to our service called operators. This differs from our previously available provider data as it describes who is actually operating an address, as opposed to who owns it in the internet address registry.

Adding this extra data is important because it adds context to our responses. Previously, if you checked an address and it came back as a VPN, you would only receive information about the registered owner of that address. The problem is that the owner of an address is rarely the company responsible for running the VPN software on it.

That is where our new operator data comes in. When checking an address that we know is being operated by a VPN company we will show that extra operator information in both our v2 API result and on our threat pages.

The data we'll be exposing through the API includes the operator name, the level of anonymity they offer, how popular the VPN service is, which VPN protocols they support and specific policies they have, such as whether they offer free or paid plans, accept anonymous payment options or offer port forwarding and adblocking.
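
As a hedged illustration of consuming that data (the field names below are placeholders of ours, not a schema reference; consult the API documentation for the real response layout):

```php
<?php
// Illustrative: read operator data from a v2 API response.
$ip   = '198.51.100.1';
$json = file_get_contents("https://proxycheck.io/v2/{$ip}?key=YOUR_KEY&vpn=1");
$data = json_decode($json, true);

// Field names are examples only.
$operator = $data[$ip]['operator'] ?? null;
if ($operator !== null) {
    echo "Operated by: {$operator['name']}" . PHP_EOL;
    // e.g. flag operators whose policies allow anonymous payments.
}
```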

In addition to exposing this data through our API, we've also added it to the custom rules within the dashboard, allowing you to block specific VPN operators by name, or operators that allow anonymous payment options and the blocking of ads. In fact, we've added support for almost everything in the operator API response to be utilised by your custom rules.

We've also made some general improvements to the custom rule feature itself. For example, when setting a custom output modifier you can now add multiple pieces of information as a nested array, and you can convert singular values to arrays and then add to their contents. You can see how this is done on the right in the below screenshot.

[Screenshot: the categorised API Provided Values dropdown (left) and nested array output modifiers (right)]

Additionally, we've added categories to the API Provided Values dropdown within the condition section of custom rules, as shown on the left in the above screenshot, making it easier to find the data you want to use in your rule, especially now that we have so many data providers.

Below we've included a screenshot showing one of the new operator cards that appear on our threat pages when using dark mode; there is of course a light version of these cards as well.

[Screenshot: an operator card on a threat page in dark mode]

As you can see above, the colors featured on the card match the operator's logo to give a consistent appearance. You'll find this is the case for all the operators we've profiled; for example, here are some cards for NordVPN, CyberGhost and WeVPN, three popular VPN operators with distinctive color use.

To access the new operator data via our API, you merely need to be using the latest version of the API, which is selectable from the version dropdown in the customer dashboard. If you're already set to always use the latest version then you've been upgraded already.

This new API version is dated the 2nd of December 2021 and is manually selectable through the API directly using the version flag, for example &ver=02-December-2021. This can be useful if you want to try the new API version without changing which version your production requests use.
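
For example, a request pinned to the new version could look like this (the IP and key are illustrative):

```php
<?php
// Pin a single request to the 2nd December 2021 API version without
// changing the account-wide default set in the dashboard.
$url = 'https://proxycheck.io/v2/198.51.100.1'
     . '?key=YOUR_KEY&ver=02-December-2021';
$result = json_decode(file_get_contents($url), true);
```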

So that's all the updates we have for you today. We would very much like for you to visit some of the links above so you can see the card designs for yourself and we hope everyone is having a great week.


Supercharging our API with PHP 8.1


Today we've upgraded the PHP version powering our v2 API from 7.3 to 8.1. This is an upgrade we've wanted to do for quite a while; in fact, we've been running tests against 8.0.x versions of PHP for the past 12 months. It has taken some effort to upgrade our code due to the many changes between PHP versions, specifically the deprecation of old functions and changes to the behaviour of current functions.

In addition to those required code updates, we had to do a lot of performance testing. Due to the low latency nature of our API, we're incredibly sensitive to code interpreter changes that could introduce performance regressions. And whilst PHP 8.0 and 8.1 introduce many great ways to improve performance, including AVX instruction use, JIT compilation and improvements to OPcache, there can be performance regressions depending on the methods your code uses.

Often in coding there are a multitude of ways to accomplish the same goal; even a simple array iterating function can be implemented in many different ways, each with wildly different performance characteristics.
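
To make that concrete, here's a small benchmark sketch comparing two equivalent ways to square every element of an array; which one wins can differ between PHP versions and builds, which is exactly why we measured rather than assumed:

```php
<?php
$data = range(1, 1_000_000);

// Method 1: array_map with a closure.
$t = hrtime(true);
$a = array_map(fn ($v) => $v * $v, $data);
printf("array_map: %.1f ms\n", (hrtime(true) - $t) / 1e6);

// Method 2: a plain foreach loop.
$t = hrtime(true);
$b = [];
foreach ($data as $v) {
    $b[] = $v * $v;
}
printf("foreach:   %.1f ms\n", (hrtime(true) - $t) / 1e6);
```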

Thus we had to do a lot of performance testing. We've run billions of requests through PHP 8.x since it was released, and through this testing we've identified and changed parts of our code where needed to get the best results. This work didn't just happen in the last month; it has been an ongoing effort since November 2020, when PHP 8.0 was released.

And so today is the day our v2 API is finally upgraded to the PHP 8.x branch, specifically v8.1.0, the latest and greatest version of PHP.

With this change we're seeing a steady 25% latency improvement over our previous code, which directly translates into being able to handle more requests per second. But remember, this isn't just down to switching PHP versions; the improvement also includes all the work we did to bring our code up to the PHP 8 implementation standard. Many of our code changes have improved performance by themselves, simply by using newer or faster functions and methods.

One of the pitfalls when doing an upgrade like this is code debt. We now support four different versions of our v2 API, and the oldest of these needed more work than our newest to even execute consistently under PHP 8.x. We also had to recompile some of our own libraries that we include within our PHP environment to bring them up to 8.x compatibility.

You may have read in a previous blog post that we rewrote our caching library. The main reason for this was to support builds of PHP 8.x, although we were also able to improve performance along the way through the natural iterative design process.

At present only our v2 API is using the new PHP 8.1 interpreter, but we have done testing with the customer dashboard and other parts of our site, including administrative backends, and it looks promising for a full site rollout over the next few months.

So that's today's update. Thank you for reading, and we hope everyone is having a great weekend.


Reducing Request Payloads


When making a request to our API you may be surprised to learn that the majority of the response you receive from us isn't actually JSON data, it's headers.

In fact, an average request to our API results in a 1358 byte response, but most of the time our JSON makes up only 448 bytes of that when performing what we call a full lookup, which contains the most data our API offers. That's a ratio of just 33% JSON to 67% headers.

That is why we've gone through all the headers we send and removed the ones that don't make sense in our API responses. Headers that explain encoding, cache times and how to access the API using a newer HTTP standard will remain, as these are important for compatibility and efficiency. But headers for tracking and for debugging our CDN (Content Delivery Network) have been removed.

A lot of these headers didn't actually originate with us but were generated automatically by our CDN partner. These extra headers make perfect sense for a normal page load, where an extra 910 bytes is minuscule compared to the almost 1 MB size of most web pages today.

But for an API like ours, where the JSON is only 448 bytes, that extra 910 bytes of headers doesn't make sense. The end result is that today we've reduced those headers from 910 bytes to 359 bytes, for a total average payload size of 807 bytes. That's still sizeable, but it's much lower than the previous 1358 bytes, and extrapolated over the billions of requests we handle it really adds up.
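
If you'd like to verify the savings yourself, curl exposes the header size of a completed request; a quick sketch (illustrative IP and key):

```php
<?php
// Compare header bytes to body bytes for a single API response.
$ch = curl_init('https://proxycheck.io/v2/198.51.100.1?key=YOUR_KEY');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);

printf(
    "headers: %d bytes, body: %d bytes\n",
    curl_getinfo($ch, CURLINFO_HEADER_SIZE),
    strlen($body)
);
curl_close($ch);
```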

The end result, of course, is that you get answers from our API faster, as there is literally less data to be transferred. This change has been enabled on our v2 API today and there's nothing for you to activate or change; you should already be benefitting from it as you read this.

This change, alongside the activation of HTTP/3 and 0-RTT mentioned in our previous blog post, is part of a general efficiency drive we're undertaking, which has included the addition of a new North American server node, operating system upgrades, new webserver deployments and new server-side code interpreters.

One thing we've not mentioned in a blog post until now is that we also recently rewrote our server-side database caching system, resulting in a huge reduction in initialisation time. This is especially important because this piece of code is initialised on every single API request, so any improvement in startup time has dramatic effects when extrapolated across our entire request load.

We hope this post was interesting. We enjoy these deeper dives into what we're doing and hope to make these kinds of posts on a more regular schedule.

Thanks for reading and have a wonderful weekend!


Introducing support for HTTP/3 and 0-RTT


Today we've added support for HTTP/3 and 0-RTT across our entire website, including all versions of our API. Briefly, these technologies let us decrease the latency of establishing secure connections and of resuming prior secure connections, with dramatic results.

HTTP/3 (with QUIC) Explained

HTTP/3 is built on the QUIC protocol, which uses UDP instead of TCP and, through its ingenious design, substantially reduces the back and forth chatter between your client and the server you're requesting content from.

Traditionally, HTTP requests are initiated by a client sending a TCP SYN to a server, which answers with a TCP SYN + ACK; your client then sends another ACK, followed by a TLS request and an HTTP request, which are answered with a TLS setup response and an HTTP response.

HTTP/3 does away with most of this. Instead, the connection is initiated by the client sending a QUIC request and an HTTP request simultaneously, and receiving back a QUIC response and an HTTP response simultaneously. This cuts an entire round-trip from the back and forth required by traditional handshakes.

0-RTT Explained

While HTTP/3 with QUIC enables very fast initial connection establishment by removing an entire round-trip from the handshake process, 0-RTT takes this a step further with connection resumption.

This is where a prior handshake negotiated over HTTP/3 can be reused without renegotiation, meaning your client can send a new request for API data in its very first round-trip to our servers.

Performance Implications

The end result is a much faster TTFB (Time to First Byte) and a faster conclusion to the entire API request, followed by much lower latency for subsequent requests.

Latency is very important to us because it can be the determining factor in whether our API is deployed in a specific scenario or not. If our API is too slow for where it's needed we could lose a potential customer because of it.

And while we launch these new features today, we're currently in the midst of an infrastructure upgrade that will bring further improvements. This morning we deployed a new web server on one of our nodes (NYX) which offers us both new capabilities and opportunities for performance gains across our site and API. It also affords us tighter integration with our CDN (Content Delivery Network) by better matching their capabilities. To put it simply, there is more yet to come.

If you've been using HTTP/1.1 or HTTP/2 to access our API before now, there's no need to worry: you still can. For those who already have, or are willing to, upgrade their clients to support HTTP/3 with QUIC, we welcome you to do so; the latency benefits are too good to ignore, especially if you're performing millions of requests to us on a regular basis.
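
If your PHP and libcurl builds are recent enough to expose HTTP/3 (an assumption; availability varies by build), opting in can be as small as this sketch:

```php
<?php
// Request HTTP/3 when the constant is available, otherwise
// fall back to whatever the client negotiates by default.
$ch = curl_init('https://proxycheck.io/v2/198.51.100.1?key=YOUR_KEY');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
if (defined('CURL_HTTP_VERSION_3')) {
    curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_3);
}
$result = curl_exec($ch);
curl_close($ch);
```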

That's all the updates we have for you today, thanks for reading and have a great week!

