European Cluster Refresh


Today we'd like to take a deep dive into our European infrastructure as we feel it'll make for an interesting blog post. If you're not looking for a long and technical analysis, it's safe to skip this post: we're not announcing any new or changed features below, we're just detailing hardware today.

Why are we doing a deep dive?

The reason we're doing this is that we have moved our entire European cluster to brand new hardware. This is something we've been attempting to do gradually since 2020 by purchasing more powerful hardware and slowly phasing out older machines.

But we reached a point where this strategy wasn't giving us enough of a capacity jump. When you grow a cluster without infinite money you usually choose to grow either high or wide: add only a few servers but have them be very high-performing, or add many servers that are low to medium performing.

Naturally these decisions result in compromises. You need to weigh redundancy against hardware failure (having many nodes), performance (faster CPUs, more memory, faster storage etc.) and cost ($$$).

We felt that we could compromise on the number of nodes to vastly increase the performance of each node. Prior to today we had 6 live nodes and 1 hot-spare node for Europe; we decided to reduce this to 4 live nodes and 1 hot-spare and use the money saved from consolidating to drastically raise performance. The new upgrade we're about to reveal is still several times the cost of our previous infrastructure, but the performance gains are much higher than the cost increase.

Put simply, our cost per request falls dramatically when comparing the total request capacity of our new servers vs our old servers. So without speaking more abstractly, let's get to the technical details and list what our servers were before and what we're operating now.

Our previous European infrastructure

For Europe our prior servers mostly consisted of Haswell era Core i7 Quad Core and XEON E3 Quad Core processor based systems. Only our newest node (THEA) operated with an 8 Core EPYC processor. Most of these servers were equipped with 32 GB of memory and exclusively used Hard Disk Drives except for THEA which used NVMe based flash storage.

We've created the graphic below to illustrate our prior hardware.


As you can see, the majority of our live infrastructure was quad core and using hard disk drives. You may be wondering why we choose to own our hardware at all as opposed to using Amazon Web Services, Google Cloud or Microsoft Azure, and there are a few good reasons.

Why not Cloud hosting?

Firstly, those cloud services cost a lot of money relative to the market. And while you can scale quickly to support lots of customers, you can often be blindsided by sudden increases in costs, whether from database transactions, egress fees, compute or storage use. We estimated the cost of using these common cloud providers to be several times higher than operating our own equipment.

Secondly, those services do have outages and in multiple instances we've seen worldwide outages of both AWS and Azure. This means if we were to use these cloud providers we would need to use more than one simultaneously which complicates our software development and compounds the cost problem as we need multiple nodes running simultaneously at each cloud provider for redundancy.

Thirdly, performance. It may sound counterintuitive that these mega cloud providers don't offer the best performance when you can scale your application to hundreds or even thousands of servers. But when you're dealing with billions of requests each with a request payload under 1000 bytes the TTFB (Time to First Byte) matters. This problem is mainly due to their servers using either XEON or EPYC server grade microprocessors which feature low single-thread performance by design to allow for very high core counts in an acceptable power envelope.

One of the main reasons that we previously chose to use Core i7 and E3 XEON processors is that they all have very high clock speeds, and thus high single-threaded performance when compared to lower clocked processors of the same architecture. It's not uncommon to receive a 2GHz E5, E7, Silver or Gold server grade XEON CPU when using one of these cloud providers, whereas the consumer Core i7 and workstation E3 XEON processors are regularly in the 3.5 to 4.1GHz range. This high single thread performance is important for maintaining each individual request's low latency, as multiple CPU threads do not work together on a single API request in our architecture.

Fourth, security. One thing all these cloud providers have in common is that the instances they provide are virtualised. And as we've seen over the past several years with Spectre and Meltdown, the types of vulnerabilities being found make being on the same server as other individuals risky. There is always the possibility that the virtual machine host becomes compromised, enabling the memory of another virtual machine guest to be read.

These types of exploits aren't just theoretical anymore; real attacks like this are occurring every day on unpatched systems, and as new vulnerabilities are discovered they can be exploited before mitigations become available. And while newer processors such as AMD's EPYC line now offer fully encrypted virtual machine memory by default with in-CPU hardware based cryptographic stores, there's still always the possibility of vulnerabilities that undermine these added layers of security.

This has been a very strong reason for us to use dedicated hardware whenever we can: if we're the only user on the system, it fully eliminates the possibility of this issue affecting our infrastructure.

Our new European infrastructure

So what exactly is the new hardware we've chosen for our European cluster? First let's show a graphic and then we'll go into more detail.


Because the new servers have so many cores we've had to make them a little smaller in the above illustration, but rest assured each core here is 1x to 2x higher performing than the cores in our previous machines, and as you can see there are 16 of them per server as opposed to 4 or 8 in our previous cluster.


Based on the CPU benchmarks we've performed, these new servers raise performance to 4.42 times what we had before, and yes you read that correctly. We would need to duplicate our old infrastructure just under 4.5 times to equal the CPU performance of our new infrastructure. That would be 24 of our old servers to match 4 of these new ones.

And that is because we're using the AMD Ryzen 9 5950X 16 core / 32 thread Zen 3 based microprocessor in all of our new servers. This is the fastest processor AMD sells when it comes to single-threaded performance, and the fastest they sell up to 16 cores in multithreaded performance. It has a base clock of 3.4GHz and a boost clock of 4.9GHz, and in our testing these CPUs stay at a steady 4.7GHz.


In the CPU PassMark multithreaded test, our previous infrastructure with all servers combined scored 41,772 points. Our new infrastructure by comparison scores 184,652 points.
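As a quick sanity check on the 4.42x figure mentioned earlier, the ratio of those two combined PassMark scores works out as follows (just an illustrative calculation, not part of our benchmarking methodology):

```python
# Combined multithreaded CPU PassMark scores quoted above.
old_cluster_score = 41_772   # previous European infrastructure, all servers
new_cluster_score = 184_652  # four new Ryzen 9 5950X servers

# Ratio of new aggregate performance to old aggregate performance.
speedup = new_cluster_score / old_cluster_score
print(f"{speedup:.2f}x")  # prints "4.42x"
```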

And this processor doesn't just bring the heat when it comes to performance, as it also supports up to 128 GB of the fastest ECC memory. Which just so happens to be exactly what we've equipped it with, as we're using 3200 MHz ECC 32 GB modules from Samsung, which are the fastest JEDEC compliant ECC modules available.

As if that wasn't enough this processor also supports PCIe 4.0 which means we were able to equip each server with two 3.84TB PCIe 4.0 NVMe Enterprise drives from Samsung with a rated sequential transfer speed of 7,000MB/s and over a million IOPS. And yes we do have two of these in every server.


One of the things we've tried to do with our previous infrastructure is not allow our hard drives to hold us back. To that end we developed a tiered caching system for database reads and writes which allowed the API to run consistently fast even while serving a huge volume of unique data for every request. And while we will continue to use this system even on our new servers, we have now raised the base level of disk performance by 7,000% when it comes to sequential access and 34,000% for IOPS.

Put simply, this performance gain will have a dramatic smoothing effect on data access, which will result in a more consistent experience for our customers accessing the API.
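The tiered caching system itself is proprietary, but the general idea of a tiered read path can be sketched roughly like this (hypothetical structure and names, not our actual implementation):

```python
# Hypothetical sketch of a tiered read path: memory first, then an
# on-disk cache, then the backing database. Names are illustrative only.
class TieredCache:
    def __init__(self, database):
        self.memory = {}          # hot tier: in-process dictionary
        self.disk = {}            # warm tier: stands in for an on-disk cache
        self.database = database  # cold tier: authoritative store

    def read(self, key):
        if key in self.memory:             # fastest path
            return self.memory[key]
        if key in self.disk:               # promote to the hot tier
            self.memory[key] = self.disk[key]
            return self.memory[key]
        value = self.database[key]         # slowest path: hit the database
        self.disk[key] = value             # populate both cache tiers
        self.memory[key] = value
        return value

cache = TieredCache(database={"customer:42": "active"})
print(cache.read("customer:42"))  # prints "active"
```

Faster drives raise the floor of the slowest path in a design like this, which is where the consistency gain comes from.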

The numbers when you combine all our new hardware together are mind boggling: 64 cores, 128 threads, half a terabyte of memory and 32TB of the fastest NVMe based flash storage you can get. And one quick note about the storage: with this change to flash based storage for our European servers we have now eliminated all hard disk drives across all our infrastructure, including our North American servers which have been using flash based storage since their original deployments.

Why make this move now?

So you've seen the hardware, but you may be asking why we chose to do this now and not earlier. Well, a few things aligned to drive this decision.

Firstly, we've been wanting to move off our old hardware for a while due to the increase in demand for our services. We estimated that to keep up with demand in Europe alone we would need to bring online a new server every 3 to 4 months. One thing people may not consider when adding servers to a cluster is that the more servers you have, the smaller the impact adding one extra server has.

For instance, moving from 6 servers to 7 servers only reduces each server's share of the load from 16.66% to 14.28%. The question at that point is will you really feel that 2.38% redistribution on each of your servers? In our experience, not really. But if you're moving from 2 to 3 servers or 4 to 5, the difference is much greater.
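Those percentages are simply each node's share of evenly balanced traffic, which you can verify quickly:

```python
import math

def share_per_node(node_count):
    """Each node's percentage of evenly distributed load, truncated to 2 dp."""
    return math.floor(100 / node_count * 100) / 100

print(share_per_node(6))  # prints 16.66
print(share_per_node(7))  # prints 14.28
print(round(share_per_node(6) - share_per_node(7), 2))  # prints 2.38
```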

So in this situation we decided to grow our infrastructure higher instead of wider by reducing the number of nodes in the cluster from 6 to 4 while making each individual node as powerful as our entire previous cluster. And when we do eventually add a 5th node it will have a larger impact. We think staying around 6 nodes per region maximum is a good standard for us at the moment, and as newer and faster hardware becomes available (32 core CPUs with high frequencies for instance) we may just upgrade to those as opposed to adding more servers to the cluster.

Secondly, we've seen the DDoS attacks on our service increase in both severity and frequency. And although we use an Anti-DDoS service (Cloudflare), their ability to scrub all of this attack traffic is limited because we're operating an API and not a normal website that they can just cache and re-serve to our legitimate visitors. This really necessitated scaling up to be able to withstand the large attacks we've been receiving.

Thirdly, the price of maintaining our old infrastructure was starting to become uncomfortable relative to its performance. Right now Europe is suffering through an energy crisis, and our older servers offered very low performance per watt when compared with newer hardware. Put simply, for every watt of energy they consumed we received around 0.23 to 0.25 units of performance relative to our new hardware, which delivers 1 unit of performance per watt: just over a 4 fold increase.

Fourth, the hardware available in the market caught up to what we wanted. Moving infrastructure like this is a big job. There's a lot to consider, like what hardware to choose, comparing benchmarks, features and upfront cost as well as long term costs. In addition, deploying and setting up all of these new servers in a controlled manner with zero downtime takes considerable time and effort.

So when we decided to change servers we didn't want to do it for just 0.5-1.5x performance gains. That isn't enough of a jump to warrant all that time and effort. But a much larger 4.42x jump? Well, that's substantial enough to make it worthwhile. When choosing the Ryzen 9 5950X we also considered the R5 3600, R7 3700X, R9 3900, EPYC 7502P and even the Intel Core i7 8700, Core i9 9900K and Core i9 12900K.

Ultimately we decided on the Ryzen 9 5950X out of the AMD processors because it's built on their latest Zen 3 architecture, which delivers incredible single thread performance while still boasting 16 cores and 32 threads. When it came to Intel's offerings, only the 12900K could rival the 5950X in single threaded performance, but its hybrid big.LITTLE-style core structure was unappealing to us and does let it down in multithreaded workloads. It also doesn't support ECC memory.

It took the market quite a while to deliver processors at the level we just discussed. From about 2007 to 2017 quad core processors reigned supreme in the mainstream of the market and most affordable server hosts were only deploying those which is why we ended up with so many of them in our infrastructure. Since 2017 though we've seen a steady increase in core counts with AMD offering 16 cores on their mainstream desktop platform and 128 cores on their dual-socket server platform.

As we mentioned a couple of times above, it's great having many cores, but single threaded performance still matters, which is why we continue to use these more consumer orientated microprocessors which offer much higher frequencies than their server equivalents. It just so happens that with the 5950X we didn't need to sacrifice desirable features common to servers such as high core counts (16 cores), fast I/O (PCIe 4.0), large quantities of RAM (128GB) and ECC (error correcting) memory support.

The last thing we wanted to discuss is redundancy. As we mentioned before when moving from 6 to 4 servers for Europe we considered the redundancy and decided it was a worthwhile compromise. Part of that rationale is because we're not placing all four servers in one datacenter. They have been placed in three geographically separated datacenters across multiple countries.


So that's our infrastructure deep dive for Europe. This is all part of a wider infrastructure plan as we still seek to find good hosting opportunities in Asia. We tested a few servers in Asia last year, and while none of them were able to deliver to our high standards, we continue to look and remain optimistic that we will find something within our budget that has the performance and reliability we require.

Thanks for reading, we hope this was interesting and have a wonderful week.

Subscription Plan Price Increases


Today we have increased the prices of two business plans and all our enterprise plans. Before we get into the new pricing, it's important to make clear these prices only apply to newly started plans and plan alterations. This means they do not apply to you if you're already subscribed to an affected plan; you will continue to pay the previous lower price until you cancel, upgrade or downgrade your subscription.

So let's first show the old and new prices and then we'll explain why we've done this.


As you can see our Enterprise plans have doubled in price and we've also increased the pricing on our two highest Business plans by $5 and $10 per month. There are a few reasons we've made these changes which we'll now go over.

Firstly, we're still much more affordable than our competition even after the pricing changes. In fact our highest priced enterprise plan is still lower than our competitors' lowest business plans while offering hundreds of thousands more queries per day and millions more queries per month. To put it simply, we were priced too low compared to the market.

Secondly, by charging our largest customers more, when they are the ones most likely to be in a position to afford it, we can upgrade our hardware and expand our infrastructure to allow for more customers in the future. These large customers also create the biggest load on the service, which leads into our next point.

Thirdly, we wanted to lock in our per-query price point. Once you reach 2.56 million daily queries it doesn't make sense to receive a discount for using even more resources. At that scale you need what you need, and a slightly lower price per query isn't a viable upsell proposition. Moreover, if you're making such a large purchase of daily queries, we've found you'll most likely use most of your allowance, which means more incurred cost for us.

So you may notice the $49.99, $99.99, $149.99 and $199.99 price points all correspond to the same cost per query. Meaning you don't receive a lowered cost per query for switching between these plans; you simply trade a linear amount, e.g. 2x more money for 2x more queries. This makes these larger plans more sustainable and helps us pay for infrastructure commensurate with the burden these plans put on our service.
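To illustrate the linear trade, here's a quick check that the cost per query stays (near) constant across those price points. The daily query allowances below are an assumption for illustration only, scaled linearly down from the 2.56 million figure mentioned above; they are not our published plan limits:

```python
import math

# Plan prices from the post; daily query allowances are hypothetical,
# assumed to scale linearly from the 2.56M top-tier figure.
plans = {
    49.99:  640_000,    # hypothetical: 2.56M / 4
    99.99:  1_280_000,  # hypothetical: 2.56M / 2
    149.99: 1_920_000,  # hypothetical: 2.56M * 3/4
    199.99: 2_560_000,  # quoted in the post
}

costs = [price / queries for price, queries in plans.items()]
# "2x more money for 2x more queries": cost per query is near constant
# (the .99 price endings introduce a tiny rounding wobble).
assert all(math.isclose(c, costs[0], rel_tol=1e-3) for c in costs)
print(f"${costs[0] * 1_000_000:.2f} per million queries")  # prints "$78.11 per million queries"
```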

Like we mentioned at the start, these prices only come into play for people starting a new plan or changing plans. We've always done it this way: the price you pay at the moment you subscribe will always be the same when it comes time to renew. If you're on a pre-paid plan, for instance paying by PayPal, we will of course honour the pricing of your most recent payment when it comes time to renew.

We're also prepared to manually change your plan to one of the ones listed above until February 25th 2022. This means that if you're currently subscribed to a plan and contact our support before then, you can upgrade to one of the plans that have changed price while paying the previous lower prices rather than the new ones.

In addition to the price changes we've also increased the amount of custom rules the enterprise plans can have enabled at one time to be more in line with our plan-to-plan increases in custom rule allowances, these are also displayed in the above screenshot.

We know price increases are always disappointing, but we think we've reached a fair balance where those who use the most are covering the lion's share of their costs to us, and the increases don't apply to anyone currently subscribed to the service, who have helped build our service into what it is today through their patronage.

Thank you for reading and we hope you have a wonderful week.

New Custom Rule Feature: Continued Execution


Since introducing the Custom Rules feature we've made many improvements, but there has been one aspect that hasn't changed: when the conditions of a rule are met, it runs, and then no other rules positioned below it are allowed to run, even if their conditions would also be met.

We did this originally for one main reason: it made rules simpler for customers to understand. Specifically, they knew that when a rule ran the other rules would not run, making it easier to visualise the cause and effect of their created rules.

As time has moved on though, this reason makes less and less sense. Firstly, custom rules have grown substantially in their available feature set. We first introduced optional condition groups, then the rule library and most recently managed rules. In addition we've vastly expanded the available data providers, comparison types and output modifiers that can be used.

All of those changes were made to increase the utility of rules, and as a result the complexity has naturally increased too. But we think that by enabling multiple rules to run one after another we can reduce some of that complexity, because we know some customers are having to make extremely detailed and complicated rules due to only a single rule being able to run per query.


And so that is why, when a rule is expanded, you'll now see the new toggle shown above, which when enabled will allow your rules to continue to be processed even after a specific rule is triggered.

By default this toggle will be off for all newly created rules and past saved rules, as we think most of the time you'll only want one rule to run; but now, when you do need more than one rule to run, you can easily change that behaviour. This toggle will be available on all types of rules, including managed ones, where you'll decide whether to have rule processing continue or not.
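The behaviour described above can be sketched as a simple loop (illustrative logic only, not our actual rule engine):

```python
# Illustrative sketch: rules run top to bottom; a match stops processing
# unless that rule has the new continue_execution toggle enabled.
def run_rules(rules, query):
    triggered = []
    for rule in rules:
        if rule["condition"](query):
            triggered.append(rule["name"])
            if not rule.get("continue_execution", False):  # default: off
                break  # preserve the original single-rule behaviour
    return triggered

rules = [
    {"name": "tag-vpn", "condition": lambda q: q["vpn"], "continue_execution": True},
    {"name": "block-risky", "condition": lambda q: q["risk"] > 66},
    {"name": "never-reached", "condition": lambda q: True},
]
print(run_rules(rules, {"vpn": True, "risk": 80}))  # prints ['tag-vpn', 'block-risky']
```

With the toggle off on "block-risky", processing stops there even though the final rule would also match.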

So that's the update for today. It sure has been a week full of changes, hasn't it! And we still have more to come at the start of next month, so make sure to check back for that!

Introducing Easy Plan Alterations


Today we've introduced a new feature to the customer dashboard that we know is going to be very popular: the ability to alter your paid plan at any time, both upgrading and downgrading, with prorated pricing.

This means as your needs grow you can easily increase your plan size yourself while only paying the difference between your current plan and new plan, or if your needs shrink you can downgrade your plan and receive a monetary credit which will be used against future invoices.
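Prorated pricing like this is typically calculated from the unused fraction of the current billing period. A simplified sketch with hypothetical numbers (our actual billing logic may differ in its rounding and period handling):

```python
def prorate(old_price, new_price, days_remaining, days_in_period=30):
    """Credit the unused fraction of the old plan against the new one."""
    credit = old_price * days_remaining / days_in_period
    due = new_price - credit
    # A downgrade can leave a negative amount due, i.e. an account credit.
    return (round(due, 2) if due > 0 else 0.0, round(max(-due, 0.0), 2))

# Upgrading halfway through the month from a $10 plan to a $30 plan:
due, credit = prorate(10.00, 30.00, days_remaining=15)
print(due, credit)  # prints 25.0 0.0
```

Running the same function the other way, `prorate(30.00, 10.00, days_remaining=15)`, yields nothing due and a $5 account credit.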

Prior to today you had to contact our support to have plan alterations performed for you, and we know this was suboptimal: not only did it add friction for customers wanting to alter their plans, but it increased our support burden.

We did in fact start to implement this feature during the height of the COVID pandemic, when we saw a very high volume of plan increase requests due to more people working and spending time at home on the internet, resulting in higher query usage by the websites and services utilising our API. And although other features began to take precedence, we're very happy to finally bring this feature to fruition.


And we certainly think it was well worth the wait to implement it properly; as can be seen in the screenshot above, it's very easy to change plans. We know that many companies make it easy to start a plan but don't always make it easy to downgrade or cancel. The dreaded "contact our support" to facilitate a downgrade or cancellation often leads to an annoying sales pitch.

We fully reject this way of doing things which is why we've added not just the ability to upgrade your plan but also downgrade with credit being applied to your account balance that will be used automatically for future invoices.

In addition to upgrading and downgrading plans you can also switch from monthly to yearly or yearly to monthly plans. So if you've been a customer for a long time paying monthly you can now easily switch to paying yearly for our 8.33% discount.
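That 8.33% discount works out to paying for 11 months instead of 12. For a hypothetical $10/month plan (an illustrative price, not one of our actual plans):

```python
monthly_price = 10.00              # hypothetical plan price
yearly_price = monthly_price * 11  # pay for 11 months, get 12

discount = (monthly_price * 12 - yearly_price) / (monthly_price * 12)
print(f"{discount * 100:.2f}%")  # prints "8.33%"
```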

One last thing to mention, if you do happen to have a balance credit this will now be shown in your dashboard along the top information bar so you can keep track of any funds you have to be used for future payments.

Thanks for reading and we hope everyone is having a great week!

New European Server Node Introduced


Today we've introduced a new node to our cluster called THEA, which increases our European capacity by 17% in request terms but by about 30% in processing terms for that region.

This is running the same hardware configuration as our LETO node, introduced in November last year for North America, which means it's running one of the latest EPYC processors from AMD, featuring a very high thread count and high Instructions Per Clock (IPC).

We've done this for three main reasons.

Firstly request load has been steadily increasing over the past few months which means we needed to expand our footprint to support new customers. We did that already for North America late last year with LETO and now in Europe with THEA.

Secondly, we have been planning to migrate our server nodes to higher specification servers anyway. This new baseline includes the latest generation of high core count processors from AMD and PCIe based NVMe flash storage. Both LETO and THEA meet these new criteria.

Thirdly as you may have read about yesterday the attacks against us are increasing in frequency and severity. This reality means it's important we have extra capacity beyond what is merely required to run our service.

And while we do heavily rely on our CDN partner Cloudflare to scrub attack traffic for us, having our own infrastructure be able to withstand some of the attack traffic still plays an important role. They can't always react fast enough or immediately decipher which of our traffic is legitimate and which is malicious, due to our service being an API that is accessed by headless servers for the vast majority of requests.

So that is our announcement for today. We will be announcing a new feature early next month so make sure to check back for that. Until then, thanks for reading and have a wonderful week.

Major service disruption


Today between 12:25 PM and 1:15 PM GMT we suffered a major outage. At its peak just over half of all traffic sent to our servers did not receive any kind of response.

This was due to a very large attack on our infrastructure that didn't trigger our anti-DDoS protection, because the attack originated from a very large number of source addresses and created traffic similar to that of our legitimate customers. In addition, one of our server nodes was offline before the attack began due to an unrelated fault, which removed 25% of our North American cluster capacity.

The attack came to an end when we mitigated it manually by engaging certain controls at our CDN partner, which immediately brought service back into normal operation.

With attacks against the service becoming more frequent, we will be spending even more time looking at our mitigation strategy. Today we were slow to react to this attack because our automatic system didn't engage, and when trying to deal with it manually we found it difficult to pinpoint which addresses were launching the attack amongst our normal traffic.

Although we saw our traffic was several times greater than normal, we couldn't quickly identify which addresses were part of the attack and which were legitimate customer traffic, due to the attack purposefully mimicking legitimate requests to our service.

If we have anything more to share about this attack we will update this post. Until then, we are very sorry this occurred and we will strive to do better.

Introducing Managed Rules


Almost one year ago on January 20th 2021 we introduced a new feature called the custom rule library. This was a new menu within the customer dashboard that contained pre-made rule templates that you could import into your account and then tailor to your specific needs.

This has been a very well received feature that led to a huge increase in the usage of custom rules by our customers throughout last year.

But there has been something lacking: what happens when you import a rule from the library that needs to change over time? For example, let's say you want to create a rule that applies to a specific company's network, but over time their network will expand and the rule may no longer encompass their entire infrastructure.

That is where managed rules come in. We still have the same rule templates as before, which we're now calling self-managed rules, but in addition we've added what we call managed rules.


As the above screenshot shows, the rule library now has new buttons at the top for displaying self-managed or managed rules. We want to be clear that we're still committed to self-managed rules, and in fact every managed rule we create going forward will have a self-managed version available for easy templating.

To explain how it works, when you add a managed rule to your account we will control the conditions that trigger that rule for you. Whenever we update that rule in the library your saved version will also be updated automatically. You still get to name the rule and alter the rule outputs as we only manage the conditions for the rule. Below is a screenshot showing how an expanded managed rule looks in the dashboard.


We've made some of the global control buttons for managed rules blue so they're easier to notice within the dashboard interface while self-managed rules will continue to use pink global control buttons.

If we ever remove a managed rule from the library that you have saved, then your saved rule will automatically transition to become a self-managed rule that you can fully modify. At present we haven't had any need to remove a rule from the library, but it may happen in the future; for instance, if a company ceases to exist, a rule targeting that company may no longer be needed.
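The lifecycle described above can be sketched as follows (a hypothetical data model for illustration; the real synchronisation happens server-side in our dashboard):

```python
# Hypothetical sketch of the managed-rule lifecycle: library updates
# replace only the conditions of saved copies, and removal from the
# library converts a saved copy into a fully editable self-managed rule.
def sync_saved_rules(library, saved_rules):
    for rule in saved_rules:
        if not rule["managed"]:
            continue  # self-managed rules are never touched
        source = library.get(rule["library_id"])
        if source is None:
            rule["managed"] = False  # library rule removed: hand over control
        else:
            rule["conditions"] = source["conditions"]  # name/outputs stay yours

library = {"datacenter-ips": {"conditions": ["ip in 203.0.113.0/24"]}}
saved = [
    {"library_id": "datacenter-ips", "managed": True,
     "name": "My DC rule", "conditions": [], "output": "block"},
    {"library_id": "gone-rule", "managed": True,
     "name": "Old rule", "conditions": ["x"], "output": "allow"},
]
sync_saved_rules(library, saved)
print(saved[0]["conditions"], saved[1]["managed"])  # prints ['ip in 203.0.113.0/24'] False
```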

Like all our rules you can import and export these easily and you can even export a managed rule, edit it in your favourite text editor and import it back to your account as a managed or self-managed rule instead.

Managed rules are available immediately for all accounts using our v2 API versions dated 2021 or newer.

Thanks for reading and happy new year!

Our 2021 Retrospective


At the end of each year we like to look back and discuss some of the significant things that happened with our service including milestones, new features and improvements.

So before we get to this year, let's start by evaluating a major feature we introduced at the tail end of last year called burst tokens. Since introducing this feature, we've had many customers tell us it has provided them with the confidence to make a purchase by quelling their usage anxiety. Before this feature, customers had to guess what their usage would be in the future, and many found this difficult.


The cushion provided by burst tokens has helped to alleviate this decision burden and as a result we've seen an increase in free customers converting to paid plans.

In late December last year we also introduced a major update to our Cross-Origin Resource Sharing (CORS) feature enabling the use of wildcard sub-domains and the addition of an API endpoint to alter your domains in an automated way.


These changes helped expand the usage of CORS by 191% compared to 2020, and many customers told us the wildcard support was the main reason they began using this feature, due to the speed with which they could now deploy it across their entire domain with a single entry.
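Wildcard sub-domain matching of the kind described is conceptually simple. A hypothetical sketch, not our production matcher:

```python
# Hypothetical sketch of wildcard sub-domain origin matching.
def origin_allowed(origin_host, allowed_entries):
    for entry in allowed_entries:
        if entry.startswith("*."):
            # "*.example.com" matches any sub-domain, but not the apex.
            if origin_host.endswith(entry[1:]):  # entry[1:] == ".example.com"
                return True
        elif origin_host == entry:
            return True
    return False

allowed = ["*.example.com", "example.com"]
print(origin_allowed("app.example.com", allowed))   # prints True
print(origin_allowed("evil-example.com", allowed))  # prints False
```

Note the wildcard entry matches on the dot boundary, so a lookalike domain like `evil-example.com` is rejected.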

Last year saw us finally launch North American nodes in December 2020, and we added two more this year, in January 2021 and November 2021. We've seen our US based traffic steadily increase throughout the year, hence the introduction of several new servers.

Traffic in general doubled this year over 2020 with the majority of that originating in North America. Our infrastructure has held up great and we are in the process of swapping out older nodes for newer hardware. The newest server we introduced in November 2021 (LETO) is now our most powerful server node and has become the new baseline for what we procure going forward.

In addition to growing our physical infrastructure we also made many investments in our virtual infrastructure. We've been able to handle the influx of customers and their traffic without incident while maintaining fast database coherency within our cluster. We use a custom database and cluster architecture in part to maximise our hardware and offer the most affordable pricing by not relying on expensive commercial solutions.

At the beginning of the year we introduced a new change log interface featuring color coded categories for the different kinds of changes we make.


This has been a joy for us to use as we take great satisfaction in detailing the work we do to make our service better for you.

One big change we made to the Dashboard this year was the automatic refreshing of the positive detection log. We saw from our analytics that many customers like to leave the dashboard open for long periods of time, so we looked at ways to improve this through live updating. We followed this up recently with a realtime QPS (Queries Per Second) display, which has been well received.

A huge feature we introduced in 2019 was Custom Rules. This is the feature that enables customers to fully tailor how our API responds to their queries. And at the very beginning of this year we built upon it with a Custom Rule Library, which currently contains 26 pre-made rules you can import into your account and then edit. Since introducing this library we've seen rule use by accounts increase by 267% compared to 2020.

Image description

We found rules to be so empowering for our customers that we wanted to enhance them further, so we added the ability to import and export individual rules and increased the quantity of rules customers can have enabled at any one moment. We also made various UI improvements to make rules easier to create and manage, and introduced new condition types and API-provided value options.

This year we expanded the control you have over your account by adding a button in the Dashboard that lets you delete it yourself. Prior to this you needed to contact support to have your account and all associated data removed from our service, which we felt was unduly burdensome.

Image description

We believe it should always be as easy to leave a service and take your data with you as it was to sign up in the first place. In addition to these manual account controls, we also introduced automatic account deletion for when it's clear an account hasn't been used and there's no good reason for us to keep your data any longer.

When it comes to the website we mostly do pruning and tweaking. But in August this year we made a dramatic change with the introduction of a dark mode. You may even be using it right now to read this very post! This feature took a lot of time to get right, but we're very happy with it.

In addition to the dark mode, we followed this up with an overall design overhaul we call Glass, which introduced our topological map background to all our webpages and adjusted our Raleway font to normalise number heights.

When it comes to the API and general technical advances, we transitioned our v1 API into a v2 API proxy, introduced support for HTTP/3 with QUIC, updated our API backend from PHP 7.x to 8.1.x and significantly reduced our payload sizes through header pruning.

In addition to all of those changes, we introduced two new dated versions of the API that provide more data in our results, like organisation names and operator data. And speaking of operator data…

Image description

Operator data, which includes detailed data cards like the screenshot above, was one of the major features we introduced this year. We now have more than 50 VPN providers profiled in this manner, with more being added weekly. This has been a huge boost for our customers who rely on accurate VPN data: being able to say specifically which VPN service operates an IP is extremely conclusive and drives decision-making confidence.

And so that has been our 2021 highlight reel, filled with lots of growth, new features and improvements. For brevity we left out some things, like our PHP client library gaining CORS and multi-IP checking support, some of the specific UI enhancements we made across the site, small-screen device usability improvements and UTF-8 support on the API, but the things we felt were most important got a blog post and a full mention above.

Traditionally we have made any pricing changes to the service in January. We didn't do that in 2021, but we may make some pricing changes in 2022. However, any changes that increase pricing won't apply to the plan you're currently subscribed to, as the price you began your plan at is the price you pay until you change plans.

In closing we hope everyone had a great year like we did. Thanks for reading and happy holidays!

Keeping your account safe

Image description

In today's post we want to share some tips that will keep your proxycheck account secure. We're doing this because we're seeing an uptick in accounts being taken over by malicious actors and in the volume of accounts we're having to disable for breaking our terms of service.

So without further preamble, let's get into it!

Keeping your API Key secret

This is the first line of defence against your account being breached. We issue a 24-character API key to every account, where each character has 36 possible values, resulting in roughly 22 undecillion key permutations. This makes our keys practically impossible to brute force.
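That key-space figure is easy to verify with a quick back-of-the-envelope calculation:

```python
# 24 characters, each with 36 possible values (a-z, 0-9).
permutations = 36 ** 24

print(permutations)               # a 38-digit number
print(permutations // 10 ** 36)   # 22 -> about 22 undecillion (1 undecillion = 10^36)
```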

But this built-in security, by way of the key's length and complexity, means nothing if your key is not kept secret. The number one way accounts get compromised is through the key being leaked, usually in source code through publicly accessible code repositories, or through key misuse, for instance using your private key in public-facing code.

If you work on an open source project that integrates our service, always make sure the key is loaded from a file outside your main project or from a database, so it won't be inadvertently shared in your code repository.

And remember, when making queries to our API you can use TLS encryption. Since all server-side requests must include your key, this is the best way to protect it from MITM (man-in-the-middle) attacks.
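As an illustration, here's one way to combine both tips: read the key from an environment variable (so it never lands in your repository) and only ever build `https://` URLs. The environment variable name and query parameters here are our own illustrative choices; consult the API documentation for the exact options your integration needs.

```python
import os
import urllib.parse

# Load the key from the environment rather than hard-coding it in source,
# so it can't be leaked through a public code repository.
API_KEY = os.environ.get("PROXYCHECK_API_KEY", "")

def build_query_url(ip: str) -> str:
    # Always use the https:// scheme so the key is protected in transit by TLS.
    params = urllib.parse.urlencode({"key": API_KEY, "vpn": 1})
    return f"https://proxycheck.io/v2/{ip}?{params}"

# A real request would then be something like:
#   import urllib.request
#   with urllib.request.urlopen(build_query_url("8.8.8.8")) as response:
#       print(response.read())
```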

Secure your account with a password

When you sign up you'll be emailed a link to log in to your account, and one of the first things you should do is create an account password. You will still be able to log in using your API key, but the password will be required in addition to the key. Setting a password also enables logging in with your email address.

And of course, don't reuse a password you use somewhere else, because that opens you up to credential reuse attacks should we or another site you log into be compromised. We strongly recommend using a password manager, which can generate randomised passwords for each website you sign up for.

Enable two-factor authentication

In addition to setting a password, you can enable two-factor authentication: essentially an extra password that you enter when logging in and that changes every 30 seconds, making it difficult for an attacker to obtain.

You can use web-based two-factor authenticators to generate these passwords, but we strongly recommend using a separate physical device like a smartphone to generate them. Many password managers also include two-factor capability, and we fully support the industry-standard method for these, called TOTP (Time-based One-Time Password).
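For the curious, TOTP itself is a small, open algorithm (RFC 6238): an HMAC over the current 30-second time step, truncated to a short numeric code. Here's a minimal sketch in Python (an illustration of the standard, not our implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, interval: int = 30, now: float = None) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Your authenticator app or password manager runs exactly this calculation from a secret shared at setup time, which is why the codes change every 30 seconds without needing any network connection.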

We have chosen not to offer SMS-based two-factor support because it's not secure enough. This method is vulnerable to social engineering of phone network staff, who may issue an attacker a SIM card with your number on it, allowing the attacker to intercept your two-factor codes.

To encourage setting a password and enabling two-factor authentication, we offer customers two extra custom rules in addition to their plan's provided rules.

Pay attention to email alerts

Many actions within the Dashboard cause email alerts to be sent, and you should pay attention to these as they may give you an early warning that someone other than yourself has gained access to your account.

We send alerts for the following reasons:

  • You've logged into the dashboard from a new IP Address
  • You've set or changed your account password
  • You've enabled or disabled two-factor authentication
  • You've changed your email address
  • You've signed up for or cancelled a paid plan
  • You've generated a new API Key

And of course make sure our emails are reaching your inbox and not being caught in your spam filter.

Keep your email address up to date

If we need to email you for any reason and we're unable to do so, your dashboard will show a notification at the top warning you of this. It's very important that you then update your email address, because we may disable your account if we're unable to contact you.

While it's rare that we disable accounts for this reason, there have been occasions where we've had to disable an account to get a customer's attention about an important issue with their usage of our service.

Don't use temporary email services

As we stated above, it's very important that we're able to contact you about specific account-related issues. Because temporary email services are, as their name implies, temporary, we can't contact you after you sign up for our service. It's for this reason that our terms of service include an item regarding the use of temporary email addresses.

If you're found to be using a temporary email, even long after you initially signed up, the account will be disabled. You will then have one year to contact our support to have the account re-enabled so that you can change the email address. If you don't contact us within that year, the account and all associated data are automatically erased.

Something else to note about temporary email services: many of them don't have any kind of account system, and the inboxes of their temporary addresses are accessible in a public feed. This can put your account at risk of takeover.

Don't create more than one free account

This is the number one reason customers lose access to their accounts; this year alone we've had to disable thousands of accounts. We offer a very generous free tier where every feature is available in full, and only the quantity of queries, custom rules and burst tokens you receive is dictated by whether you pay and by how much.

But we still face free account abuse. Some users create hundreds or even thousands of free accounts to avoid paying for service, and this is something we cannot and do not tolerate. If you're found to have more than one free account you're risking all of your accounts: they could all be disabled at a moment's notice, at any time in the future.

Our stipulations here are very simple: you can have as many paid accounts as you want, but the moment you create multiple free accounts you're in breach of our terms of service.

And while we have allowed some customers with multiple free accounts to keep them, this is usually because their cumulative queries across all their free accounts remain below 1,000 per day, or because they contacted us first to ask permission and provided a reasonable circumstance which we accepted.

But in general we do not allow multiple free accounts and you should always follow our terms of service.

Don't commit financial fraud

Although it's rare, we do sometimes face financial fraud, where someone purchases service using stolen payment information, or uses legitimate payment information and later issues a chargeback through their bank.

Both of these issues cost us, and all merchants, a lot of money. Many people aren't aware of how financial crime impacts the cost of the goods and services they buy, but it does: we have to factor in the cost of payment insurance and the accumulating losses from chargeback fees and from employee time spent collecting and submitting evidence for financial crime investigations.

We take a very hard stance on this: if we suspect you're using fraudulent payment information, or you issue a chargeback with your bank, we will refund the subscription and disable your account. There are no exceptions.

So that's our full guide to keeping your account safe and secure. If you ever need help with your account, please don't hesitate to contact our support. Even if your account is disabled, you have a full year to contact us to recover any of your data or rectify the circumstance that led to your account being disabled. Stay safe out there and have a wonderful week.

Realtime QPS Display

Image description

Today we've added a new feature to the dashboard which displays your queries per second in real-time. This is something we've wanted to add for a while, but it wasn't until we recently rewrote our per-node caching system that delivering it became possible.

And that's because, with the billions of requests we handle, trying to sample incoming queries can itself affect your maximum requests per second. Thus a high-performance read-through cache that can provide valid snapshots of rapidly changing data, without slowing or denying changes to that same data, was paramount to making this feature happen.
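To make the idea concrete, here's a heavily simplified sketch of that pattern: per-second counters that writers keep incrementing, while readers take a consistent snapshot copy rather than iterating live data. This is our own illustration in Python, with invented names, not the actual backend code:

```python
import threading
import time
from collections import defaultdict

class QpsSampler:
    """Counts queries per wall-clock second; readers get snapshot copies."""

    def __init__(self, window: int = 60):
        self.window = window          # how many seconds of history to keep
        self._counts = defaultdict(int)
        self._lock = threading.Lock()

    def record(self, now: float = None) -> None:
        sec = int(time.time() if now is None else now)
        with self._lock:
            self._counts[sec] += 1
            # Drop buckets that have aged out of the window.
            for old in [s for s in self._counts if s < sec - self.window]:
                del self._counts[old]

    def snapshot(self) -> dict:
        # Copy under the lock so the caller gets a consistent view of the
        # counters while new queries continue to be recorded by other threads.
        with self._lock:
            return dict(self._counts)
```

A real system serving billions of requests would shard these counters per node and merge them, but the principle is the same: snapshots are cheap copies, so reading never blocks or rejects writes for long.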

Image description

Above is a little GIF we made showcasing the new QPS graph, found at the top right of the stats tab within the dashboard and available to all customers with an account.

We think we've created a beautiful and unobtrusive live display of your queries, but if you do find it distracting you can click on the display to pause it. Like our other play/pause mechanics, your choice will be saved to your browser and maintained across visits.

So that's what we have for you today. We're currently working on a lot of backend changes, but as you can see many of them result in frontend improvements. This real-time display of your queries would not have been possible without the work we did on our caching system, which itself was initiated by our move to PHP 8.x and the rewrites that required.

We hope you really like the new display and as always thanks for reading and have a great week.