Introducing a new way to pay

Image description

Today we're introducing a highly requested way to pay for a yearly plan: cryptocurrency. And we're not accepting just a single type of coin but over fifty of the most popular crypto coins available. Below are just twelve of the most common ones we support.

Image description

You may be asking why we're now (finally!) offering this way to pay. Put simply, we've spent a lot of time thinking it through. Accepting cryptocurrency isn't as straightforward as the other payment options available, and we wanted to take our time so we fully understood the nuances of receiving and holding cryptocurrency.

We know it has taken us a while to get here, but we think all the people who wrote to us over the past several years about accepting cryptocurrency will be happy with the way we've decided to move forward with it.

That's because we didn't want to pay lip service to the idea of accepting crypto by supporting only one popular coin, so we're supporting a large selection of different coins. The ones noted above are simply the most significant by market cap, and so represent the coins people most commonly hold, but we will be supporting a wide variety of coins beyond these; just ask us if the coin you want to use is accepted.

And as with our Bank Card, Apple Pay, Google Pay and PayPal payment options, you're afforded the same level of service when paying with cryptocurrency, which means you can upgrade or downgrade your plan midway through a plan's lifecycle and you're entitled to the same 30-day full refund policy.

If you would like to pay via cryptocurrency, please check the Paid Options tab of your dashboard here, where you can find all the details. We will usually fulfil plans purchased via PayPal or crypto within an hour during business hours.

Thanks for reading and have a wonderful week!


Time Zones added to the API output and Custom Rules

Image description

Today we're announcing a new piece of data we're exposing through our API: time zones. This is, as the name suggests, a way for you to view the time zone of an IP address you're checking with our API.

We've added time zones because we see them providing a lot of benefit to customers who want to tailor services to users within specific geographic areas without targeting their country or region.

Knowing a user's local time zone can be hugely beneficial to businesses: just as we plan our everyday lives around specific times, so too do businesses. Until now, customers have been reliant on the user's browser to expose local times, but this isn't always an available metric; our API is not only used on websites where the browser can provide that information.

In addition to exposing time zones in our API, we've also added them to the Custom Rules feature, which means you can create conditions and condition groups that utilise time zones for targeting. We've chosen to follow the iana.org time zone format, which means you can easily parse the time zones we provide using standard and common libraries in all major programming languages.
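Because the format is the standard IANA one, handling it in code is straightforward. Below is a minimal sketch in Python using the standard-library zoneinfo module; the "timezone" field name and the response shape shown are illustrative assumptions rather than a copy of the real API schema, which is documented on the API page.

```python
# A minimal sketch (Python 3.9+) of working with an IANA time zone string
# like the ones described above. The "timezone" field name and the response
# shape are illustrative assumptions; consult the API documentation for the
# authoritative schema.
from datetime import datetime
from zoneinfo import ZoneInfo

api_response = {
    "status": "ok",
    "203.0.113.7": {
        "country": "Australia",
        "timezone": "Australia/Sydney",  # IANA-format value
    },
}

tz_name = api_response["203.0.113.7"]["timezone"]
local_time = datetime.now(ZoneInfo(tz_name))
print(f"Local time for this visitor ({tz_name}): {local_time:%Y-%m-%d %H:%M}")
```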

We've updated our API documentation including the test console to showcase the new data and we hope you'll experiment with it!

This feature began as a request from one of our customers, which we felt had broad appeal and great synergy with the other data we already make available. If you have a great idea, please get in touch via our contact page; we love to hear from you.

Thank you for reading and have a wonderful week!


New unified changelog interface and other news

Today we've added a new changelog page to the website which allows you to quickly view the changelogs from any of our pages in a single unified interface. The main benefit is that you no longer need to visit individual pages to read about changes, and it's easier to read them in a full-screen layout, as illustrated below.

Image description

Like many of our features, this has been on our roadmap for a while. We are very proud of our transparency when it comes to feature changes, additions and fixes. We believe strongly in documenting when things change, especially on our pricing page, privacy policy, terms of service and GDPR compliance pages, where you wouldn't traditionally expect to find changelogs.

We hope you'll enjoy the new page. It does look very colourful, and it may even be interesting to read how some features like the Dashboard have evolved since 2018, when we first launched it and began logging the various upgrades and changes made to it over the years.

Looking past this new feature, we did want to discuss some other things. Since June we have been making a lot of backend changes to keep the service strong for the future, mostly based around database throughput and synchronisation within our cluster of systems.

We've also been improving the site's security against client-side attacks that could be leveraged against our customers, specifically XSS attacks. Forms, buttons and other basic functionality have been hardened to make it more difficult for potential attackers to hijack your dashboard login session through malicious links.

In addition to that, we've implemented a Dashboard API request limit for the first time. Depending on what you're accessing and how frequently you're accessing it, you may be subjected to a limit of 1 to 2 requests per second for a brief time period.

Just to reiterate, this is for the Dashboard API, where you manipulate your account settings programmatically, view statistics and download logs. It does not apply to the proxycheck v2 API that you use to check addresses, which has its own per-second request limits published on our API documentation page.
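If you script against the Dashboard API, a simple client-side pacer is enough to stay comfortably within a limit of this size. Below is a minimal sketch in Python; the URL shown is a placeholder rather than a real endpoint, and the one-request-per-second pace is just a conservative assumption based on the limit described above.

```python
# A minimal client-side pacing sketch to stay under a roughly 1-2 request
# per second limit. The URL below is a placeholder, not a real endpoint;
# see the Dashboard API documentation for actual paths and parameters.
import time
import requests

MIN_INTERVAL = 1.0  # assumed conservative spacing between calls, in seconds
_last_call = 0.0

def paced_get(url, **kwargs):
    """Issue a GET request, sleeping first so calls stay at ~1 per second."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return requests.get(url, timeout=10, **kwargs)

# Hypothetical usage:
# response = paced_get("https://example.com/dashboard-api/export", params={"key": "YOUR_API_KEY"})
```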

We've added these Dashboard API request limits to help secure our infrastructure against the possibility of resource depletion attacks. We don't believe these introduced limits will impact any of our legitimate customers.

The final thing we wanted to discuss is inflation, cost-of-living increases and energy prices. We, like most businesses, are subject to market forces, and that does mean our costs are increasing; we're seeing significant price increases in server hosting due to European energy prices. But we want to reassure all our customers: you need not worry, as we do not raise prices on currently subscribed customers, period.

In addition to that, we're not planning to increase the prices of our plans for new customers at this time; we're instead going to absorb the increase in operating costs. So don't feel pressured to sign up or upgrade your paid plan, as no price increases will occur for the remainder of this year.

We should emphasise that we've traditionally made pricing changes in January and that's still the case. And again, any pricing changes would only apply to newly started plans or changes from one plan to another; those already subscribed to a plan would not be subject to any price increases.

We know we've been quiet throughout July without a blog post, but rest assured we are working on things behind the scenes: bringing on more data partners, increasing the robustness of our backend software and improving security.

Thanks for reading and have a wonderful weekend!


Regarding Downtime Today

This is just a quick post as the situation is still developing.

Around 07:30 AM GMT the CloudFlare network began to fault worldwide, and our site, as well as all others on the CloudFlare CDN, became inaccessible to the internet. We apologise for this; unfortunately we were unable to mitigate the problem due to our heavy reliance on CloudFlare for DDoS protection and load balancing.

Around 08:12 AM GMT service began to be restored and CloudFlare is continuing to update their status page which you can view here: https://www.cloudflarestatus.com/incidents/xvs51y9qs9dj

When CloudFlare shares their post-mortem we will update this post with a link to it. Once again we deeply apologise for the unexpected downtime; it's not something we take lightly, and we appreciate your patience and understanding.

UPDATE 1:

Now that we're able to see full analytics for the time period in question, we can see that only around 50% of our traffic was affected by the CloudFlare downtime; the other half continued to be answered normally. This appears to be because the CloudFlare issues only affected specific regions and points of presence within their network. We are continuing to evaluate things, but as of this update everything is fully operational.

UPDATE 2:

CloudFlare has now published their post-mortem report: https://blog.cloudflare.com/cloudflare-outage-on-june-21-2022/


Dark Patterns: A topic for our 200th blog post

Image description

Today we wanted to do a special entry for our 200th blog post, and it's all about dark patterns in user interfaces: what they are, how you can spot them and how we reject their use in our product.

So first, let's describe what dark patterns are. In short, the term describes a deceptive user interface designed to trick users into doing things they don't want to do, or to make it unreasonably difficult for them to perform the actions they do want to take.

Some examples would be an e-commerce website advertising a sale on an item that hasn't actually been discounted from its usual price, or removing a discount once an item is added to your shopping cart. Another example would be advertising free shipping, but then at checkout there is in fact a shipping charge, or another service charge equal to the shipping cost.

A common example of dark pattern use in paid subscriptions is when a service makes it easy to sign up and pay but difficult to cancel a plan or delete your account.

There are countless examples of dark patterns all over the web. Thankfully the law isn't blind to these practices, and some states, like California in the USA, have passed stronger consumer protection laws which encompass deceptive user interface design.

But there is always more that can be done. Here at proxycheck we have rejected dark patterns since the start and we would like to list a few of the ways we've accomplished that.


When you sign up we only ask for an email address, and we do not sell or provide that address to any third parties besides our mail carrier. We don't abuse your email with spam, and our marketing emails are opt-in instead of opt-out, meaning they're off by default.

When you don't need to use the service anymore you can quickly and easily access an account deletion button within your dashboard. It's always fully visible in the top right corner of the settings tab which is the default tab.

You can export all of your data at any time through the dashboard, no need to contact any customer service people to get a copy of your data.

When you want to upgrade or downgrade your plan you can do it yourself from within the dashboard and all changes are prorated to save you money.

When you want to cancel a plan you can do that from within the dashboard with two clicks from the paid options tab.

Clear pricing that we stick to: you are never charged more than you agreed to, and we don't increase prices for subscribed customers even if the plan they're paying for goes up in price while they're subscribed.

Clear and descriptive changelogs available at the bottom right of every page, detailing all the changes made to that page; nothing deceptive or hidden here (we also encourage archive.org to scrape our pages to provide an independent record of our changes).

Easy access to refunds, even midway through a subscription's life cycle, with 30-day full refunds counted from your latest payment, not your account signup date.

Email alerts a week before we're going to bill you, which are turned on by default and cannot be disabled for yearly subscribers, so they won't get caught paying for something they're no longer using. (And remember, you can request a refund if this happens to you.)


For us, offering these features was never about following a law; we did most of this before the term dark patterns was even coined or legislated against. For us this is about delivering fairness. We do not look at our customers as an exploitable resource; we see them as valued partners.

This is why we've continued to deliver innovative new features like CORS support, Custom Rules, Burst Tokens, Custom Lists and more at no extra cost. It's why we've expanded the amount of data we provide and why we continue to add features customers need like upgrading and downgrading their plans, deleting their accounts when they no longer need the service and paying their bills outside of a recurring subscription.

At the start we mentioned this is our 200th blog post, and that's true. We started this blog on June 20th 2017 and since then we've written about a huge number of topics, service changes and feature additions. We look forward to doubling that number in the coming years and we hope you'll be along for the ride.

Thanks for reading and have a wonderful week.


Introducing Custom Lists

Image description

Today we've launched a brand new feature we're calling Custom Lists, which allows you to create customised lists of addresses, address ranges, autonomous system numbers (ASNs) and email domains for use in Blacklists, Whitelists or Custom Rules.

Image description

This new feature replaces our previous Whitelist and Blacklist tabs within the Dashboard with a new unified Custom Lists tab. And as you can see above, it looks similar to our current Custom Rules tab, but once expanded these lists offer an entirely different set of controls, as shown below.

Image description

In the above screenshot we're showing the default Manual Editing Mode, which lets you add information to a list just as you could with the previous Whitelist and Blacklist features. You can also manipulate what is contained within a list via the Dashboard API; the previous API still works, but we've also made a new one which offers control of any kind of list, not just Whitelists and Blacklists.

A brand new feature, however, is the ability to have your lists automatically downloaded from your own website on a regular schedule. When selecting Automated Mode, the list changes in appearance and allows you to specify a URL and the frequency at which you would like your list downloaded. Below is a screenshot of that interface.

Image description

In addition to offering the whitelist and blacklist functionality as before, you can now also create named lists for any eventuality, which can be leveraged solely by your Custom Rules, allowing for additional levels of customisability.

We've also added toggle buttons to each list, allowing you to easily turn them off and on without needing to erase their contents, and because we've followed our Custom Rules design guidelines you can also move lists around and easily export them for local backup.

To integrate this feature with Custom Rules we came up with a clever solution: when entering custom values into a rule condition, your Custom Lists will now be suggested by name, and you can click one to add it into the box. Those lists will then be consulted by that rule when its conditions call for it.

The last things to discuss are how much it costs, how many lists you can create, and what the 4MB size limit per list means.

Well, to answer the first question, this feature does not cost anything extra. You can create as many lists as you like, but you may only enable a certain number of them depending on your plan tier. Free users can enable 3 lists, while paid plans can enable between 5 and 60 lists, with custom plans able to go beyond that 60-list limit.

As for the 4MB size limit, we've found in testing that it's possible to add 300,000 addresses to a list without comments and still fit under 4MB. With detailed comments this falls to between 100,000 and 150,000 entries. But the 4MB limit is per list, so if you fill one up you can simply create another.
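If you maintain a list file yourself (for instance one you host for the Automated Mode described earlier), a quick size check before publishing it keeps you inside that limit. The sketch below is illustrative only: the entry types come from this post, but the inline comment style and file name are assumptions rather than a documented format.

```python
# An illustrative pre-publish check for a self-hosted list file. The 4MB
# per-list limit is the figure quoted above; the '#' comment style and the
# file name are assumptions, not a documented format.
MAX_LIST_BYTES = 4 * 1024 * 1024  # 4MB per-list limit

entries = [
    "203.0.113.0/24   # an address range",
    "198.51.100.7     # a single address",
    "AS64500          # an autonomous system number",
    "mailinator.com   # an email domain",
]

payload = "\n".join(entries) + "\n"
size = len(payload.encode("utf-8"))
print(f"{len(entries)} entries, {size} bytes "
      f"({size / MAX_LIST_BYTES:.4%} of the 4MB limit)")

with open("custom-list.txt", "w", encoding="utf-8") as handle:
    handle.write(payload)
```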

We hope you'll really like this new feature. It has required a lot of extensive work in both the Dashboard and our API; as such, we have issued a new API version dated the 25th of May 2022.

If you want to use the new lists feature you will have to upgrade to this API version, as your previous lists have been copied over to the new format. Until you upgrade, your API calls will still use your old lists, but you won't be able to alter them.

Thanks for reading and have a wonderful week.


Manual Invoice Payments

Today we've introduced an improvement to the invoicing system within the customer dashboard which allows you to pay an outstanding invoice, using any bank card you may have, through a secure portal at our payment processor, Stripe.

This is a change that we should have introduced earlier but it has only been over the last couple of months that we've seen a large increase in the need for our customers to pay their invoices manually. This has been mainly due to recent banking regulation changes in certain countries like India which have disallowed foreign entities from billing Indian bank accounts in an automated way.

As a result, when you visit the Paid Options tab of the customer dashboard you'll see a toggle for invoice options and history which reveals the control panel below.

Image description

This may look familiar if you've viewed it previously. The new part is the Pay Invoice Manually column, which in our screenshot above shows the various states you can expect to see. The most recent payment can be made manually, previously paid invoices are shown as such, and voided invoices (for instance, ones that were due but which we manually voided) are also shown.

In addition to these changes, we've also updated the email that gets sent when a payment fails so it explains how to pay your invoice manually, meaning everyone who needs this feature should be fully aware of it when the time arises.

Thanks for reading and have a great week!


General Update

Image description

Today we wanted to take some time to update you on last month's new server deployments, and on the war in Ukraine, what it means in the context of our service and how it will affect some of our customers.

Server Update

Firstly, some of you have pointed out that we reused some of the names from our old servers for our new ones. The HELIOS, AURA and ZEUS node names were reused for our new nodes, and in fact when it comes to HELIOS we physically swapped out the server at its location, meaning the new server inhabits the exact same rack as the old one.

The names EOS, RHEA and THEA were retired and we introduced ORION as a new name for one of the new servers. This name was actually chosen by one of the plugin authors we work closely with.

So how have they been performing? In short, brilliantly. We've had the highest level of performance since the service started. The time it takes for pages of our website to load is significantly down, especially when users access the Dashboard for the first time and a lot of their data hasn't been loaded into memory yet.

Similarly, the API itself has been delivering answers with the lowest latency we've ever seen and, more importantly, doing so consistently. Being able to deliver 99.99% of requests in under 9ms (before network overhead) is a remarkable achievement. This has been made possible by the marriage of our new components: the CPU, our high-speed memory and the flash-based storage all working together to provide this consistently low access latency.

Over the past month since deployment we have come under multiple DDoS attacks, in the same way that we had prior to the new server deployments, and we're happy to say the new infrastructure handled these with ease. We were able to absorb the extra traffic generated by the attacks without incident; we didn't even see the average API latency increase.

We couldn't be happier with the new infrastructure.

Ukraine

Like most of the world we're horrified by the unjustified war in Ukraine perpetrated by the Russian Federation under Putin. We condemn this needless war and hope that the Ukrainian people will ultimately prevail.

As you may know from our server status page we have two servers in Eastern Europe. However they are not in Ukraine and are not in any danger at this time. If this changes we will deploy more servers in Western or Central Europe.

When it comes to our Russian customers we want to be clear: we do not blame you for the actions undertaken by your government. However, we are beholden to the same sanctions as others, which means we can no longer process your bank cards or accept PayPal payments from you. We've also made the decision not to accept other forms of payment that may bypass the sanctions, such as cryptocurrency.

We've already heard from a few of you about this and, as we've written back, we won't be accepting other forms of payment; instead, your plans, once they end, will transition to our free tier. We won't be blocking Russian citizens from using the service, but your government, its agencies and anyone we're aware of on international sanctions lists will be denied service.

We know this is an unusual blog post; we've never had to discuss an ongoing war before. We have customers in Ukraine, Russia and many other European countries who are afraid right now. Our service should be the least of their problems, but we are receiving support emails from customers who are affected by the war and its ramifications (such as the sanctions), so we felt it was important to address a few things here in public.

Thanks for reading and please keep yourselves and each other safe.


Introducing Disposable Email Detection

Image description

Since proxycheck.io started we've aimed to provide increasingly useful data about IP addresses. At first we could tell you if an IP was a proxy server, and later if it was a VPN. Then we added location, network and specific operator data.

All of this has been to empower service providers to restrict their content to real people in real places so they can protect their communities and livelihoods from the negative effects of anonymising services.

Now, for the first time, we're launching an entirely new type of check: email. Specifically, we're going to be detecting disposable email services to let you know whether your user or customer will be contactable long term.

Disposable addresses are a big problem for service operators who need to keep in contact with their users, and even for websites like our own, where our generous free plan results in a huge amount of account creation abuse, almost all of it enabled through the use of disposable email addresses.

And so we feel there is good synergy here: emails are just another kind of address. We do, however, want to make a distinction. We're not going to be flagging privacy-respecting email services as disposable when the addresses they generate are always attached to the same individual.

This means services like iCloud, which offer users a unique email address for every service they sign up for, would not be considered disposable and would not show as disposable on our API.

However services that make a unique mailbox available only for a short time period (minutes to days) will be shown on our API as disposable.

We've been working on this feature since last year and we feel the timing is now right to launch. As of this post, disposable email detection is live and available through the latest version of our v2 API, dated Dec 21. You can check IP addresses and email addresses in exactly the same way, by placing them in a GET or POST request. Each email you check uses one query, just like an IP address does.

And you can even check both an IP address and an email address in the same request, just as you can check two or more IP addresses today.

One important thing to discuss here is privacy. Sending your users' email addresses to us is a sensitive matter, and what we do with them after you send them is an important question.

Firstly, you don't need to send us their full email address; just the @domain.tld part is needed. For instance, you can replace the mailbox name with a placeholder and send anything@domain.tld instead of the user's real address.

Secondly, any email address we receive that isn't considered disposable is instantly discarded once your request to the API is fulfilled; it is not saved anywhere.

That is because we do not want to save them; it's unnecessary for this feature to function. However, any positively detected addresses, meaning ones we considered disposable, will be saved just once, and only in your positive detection log, which is viewable only by you within your account dashboard. We will not be processing or storing them anywhere else but your account log. And if you don't want this to happen, you can disable all logging by providing the &tag=0 flag with your requests to us.
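To put those pieces together, here's a minimal sketch in Python of how a disposable-email check might look: it masks the mailbox name before sending, passes &tag=0 to disable logging, and reads back a disposable flag. The exact response field names and the "anonymous@" placeholder are illustrative assumptions; the API documentation and test console have the authoritative details.

```python
# A minimal sketch of a disposable-email check against the v2 API. The
# response field names ("disposable") and the "anonymous@" placeholder for
# the mailbox name are assumptions for illustration; see the API docs for
# the authoritative schema.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def is_disposable(email: str) -> bool:
    # Send only the domain part, as recommended above, so the user's real
    # mailbox name never leaves your systems.
    masked = "anonymous@" + email.split("@", 1)[1]
    resp = requests.get(
        f"https://proxycheck.io/v2/{masked}",
        params={"key": API_KEY, "tag": "0"},  # tag=0 disables logging, per this post
        timeout=10,
    )
    data = resp.json()
    return data.get(masked, {}).get("disposable") == "yes"

print(is_disposable("someone@mailinator.com"))
```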

So how does this work with your account and daily allowance? Every account, whether free or paid, can use its per-day query allowance to check email addresses in addition to IP addresses. It's just that simple: one query is one query, no matter which kind of check you're performing.

And you can now whitelist or blacklist email addresses just as you can with IP addresses, ranges and AS numbers. As of today the Custom Rules feature isn't enabled for use with email addresses, but that will likely change in the future as we expand our email feature.

We've already shown this feature to a few developers of third-party proxycheck.io-compatible plugins, as well as customers we work closely with, and we've received great feedback from both communities. In fact we already have many feature requests, including generalised email validation. While we cannot commit to such features today, perhaps that is an avenue our email checking will go down in the future.

So that is the new email feature. You'll find the API documentation has been updated, including our test console, also found on the API page. Feel free to send in feedback if you have ideas or a feature request.

Thanks for reading and have a wonderful week.


European Cluster Refresh

Image description

Today we'd like to take a deep dive into our European infrastructure as we feel it'll make for an interesting blog post. If you're not looking for a long and technical analysis, it's safe to skip this post; we're not announcing any new or changed features below, we're just detailing hardware today.

Why are we doing a deep dive?

Firstly, the reason we're doing this is that we have moved our entire European cluster to brand new hardware. This is something we've been attempting to do gradually since 2020 by purchasing more powerful hardware and slowly phasing out older machines.

But we reached a point where this strategy wasn't giving us enough of a capacity jump. When you grow a cluster without infinite money you usually choose to grow either high or wide: add only a few servers but have them be very high-performing, or add many servers and have them be low to medium performing.

Naturally these decisions result in compromises. You need to factor in redundancy against hardware failure (having many nodes), performance (faster CPUs, more memory, faster storage etc.) and cost ($$$).

We felt that we could compromise on the number of nodes to vastly increase the performance of each node. Prior to today we had 6 live nodes and 1 hot-spare node for Europe; we decided to reduce this to 4 live nodes and 1 hot-spare and use the money saved from consolidating to drastically raise performance. The new upgrade we're about to reveal is still several times the cost of our previous infrastructure, but the performance gains are much higher than the cost increase.

Put simply, our cost per request falls dramatically when comparing the total request capacity of our new servers with our old servers. So without speaking more abstractly, let's get to the technical details: what our servers were before and what we're operating now.

Our previous European infrastructure

For Europe, our prior servers mostly consisted of Haswell-era Core i7 quad-core and XEON E3 quad-core based systems. Only our newest node (THEA) operated with an 8-core EPYC processor. Most of these servers were equipped with 32 GB of memory and exclusively used hard disk drives, except for THEA, which used NVMe-based flash storage.

We've created the graphic below to illustrate our prior hardware.

Image description

As you can see, the majority of our live infrastructure was quad-core and using hard disk drives. You may be wondering why we choose to own our hardware at all, as opposed to using Amazon Web Services, Google Cloud or Microsoft Azure, and there are a few good reasons.

Why not Cloud hosting?

Firstly, those cloud services cost a lot of money relative to the market. And while you can scale quickly to support lots of customers, you can often be blindsided by sudden increases in costs, whether from database transactions, egress fees, compute or storage use. We estimated the cost of using these common cloud providers to be several times higher than operating our own equipment.

Secondly, those services do have outages, and on multiple occasions we've seen worldwide outages of both AWS and Azure. This means if we were to use these cloud providers we would need to use more than one simultaneously, which complicates our software development and compounds the cost problem, as we would need multiple nodes running simultaneously at each cloud provider for redundancy.

Thirdly, performance. It may sound counterintuitive that these mega cloud providers don't offer the best performance when you can scale your application to hundreds or even thousands of servers. But when you're dealing with billions of requests, each with a payload under 1,000 bytes, the TTFB (Time To First Byte) matters. This problem is mainly due to their servers using either XEON or EPYC server-grade microprocessors, which feature low single-thread performance by design to allow for very high core counts in an acceptable power envelope.

One of the main reasons we previously chose to use Core i7 and E3 XEONs is that they all have very high clock speeds, and thus high single-threaded performance, compared to lower-clocked processors of the same architecture. It's not uncommon to receive a 2GHz E5, E7, Silver or Gold server-grade XEON CPU when using one of these cloud providers, whereas the consumer Core i7 and workstation E3 XEON processors are regularly in the 3.5 to 4.1GHz range. This high single-thread performance is important for keeping each individual request's latency low, as multiple CPU threads do not work together on a single API request in our architecture.

Fourth, security. One thing all these cloud providers have in common is that the instances they provide are virtualised. And as we've seen over the past several years with Spectre and Meltdown, the types of vulnerabilities being found make being on the same server as other individuals risky. There is always the possibility that the virtual machine host becomes compromised and the memory of another virtual machine guest can be read.

These types of exploits aren't just theoretical anymore; real attacks like this are occurring every day on unpatched systems, and as new vulnerabilities are discovered they can be exploited before mitigations become available. And while newer processors such as AMD's EPYC line now offer fully encrypted virtual machine memory by default, with in-CPU hardware-based cryptographic stores, there's still always the possibility of vulnerabilities that undermine these added layers of security.

This has been a very strong reason for us to use dedicated hardware whenever we can: if we're the only user on the system, it fully eliminates the possibility of this issue affecting our infrastructure.

Our new European infrastructure

So what exactly is the new hardware we've chosen for our European cluster? First let's show a graphic and then we'll go into more detail.

Image description

Because the new servers have so many cores we've had to make them a little smaller in the above illustration, but rest assured each core here performs 1x to 2x better than the cores in our previous machines, and as you can see there are 16 of them per server, as opposed to 4 or 8 in our previous cluster.

Image description

Based on the CPU benchmarks we've performed, these new servers raise performance to 4.42 times what we had before, and yes, you read that correctly. We would need to duplicate our old infrastructure just under 4.5 times to match the CPU performance of our new infrastructure. That would be 24 of our old servers to match 4 of these new ones.

That is because we're using the AMD Ryzen 9 5950X, a 16-core / 32-thread Zen 3 based microprocessor, in all of our new servers. This is the fastest processor AMD sells when it comes to single-threaded performance, and the fastest they sell at up to 16 cores in multithreaded performance. It has a base clock of 3.4GHz and a boost clock of 4.9GHz, and in our testing these CPUs stay at a steady 4.7GHz.

Image description

In CPU PassMark, our previous infrastructure with all servers combined scored 41,772 points under a multithreaded test. Our new infrastructure by comparison scores 184,652 points.

And this processor doesn't just bring the heat when it comes to performance; it also supports up to 128 GB of the fastest ECC memory. That just so happens to be exactly what we've equipped it with, as we're using 3200 MHz ECC 32 GB modules from Samsung, which are the fastest JEDEC-compliant ECC modules available.

As if that wasn't enough, this processor also supports PCIe 4.0, which means we were able to equip each server with two 3.84TB PCIe 4.0 NVMe enterprise drives from Samsung with a rated sequential transfer speed of 7,000MB/s and over a million IOPS. And yes, we do have two of these in every server.

Image description

One of the things we tried to do with our previous infrastructure was to not let our hard drives hold us back. In that endeavour, we developed a tiered caching system for database reads and writes which allowed the API to run consistently fast even while serving a huge volume of unique data for every request. And while we will continue to use this system on our new servers, we have now raised the base level of disk performance by 7,000% for sequential access and 34,000% for IOPS.
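For readers curious what a tiered cache looks like in principle, below is a generic illustrative sketch: hot records are served from memory and only cache misses fall through to the slower persistent store. This is a textbook read-through cache under our own assumptions, not proxycheck's actual implementation.

```python
# A generic read-through cache in the spirit of the tiered caching described
# above: hot records are served from memory, misses fall through to the
# slower persistent tier. Illustrative only, not proxycheck's implementation.
from collections import OrderedDict

class ReadThroughCache:
    def __init__(self, backing_store, capacity=100_000):
        self.backing_store = backing_store  # e.g. a database lookup function
        self.capacity = capacity
        self._hot = OrderedDict()  # in-memory tier with LRU eviction

    def get(self, key):
        if key in self._hot:
            self._hot.move_to_end(key)   # mark as recently used
            return self._hot[key]
        value = self.backing_store(key)  # slow path: consult the disk tier
        self._hot[key] = value
        if len(self._hot) > self.capacity:
            self._hot.popitem(last=False)  # evict the least recently used entry
        return value
```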

Put simply, this performance gain will have a dramatic smoothing effect on data access, which will result in a more consistent experience for customers accessing the API.

The numbers when you combine all our new hardware together are mind-boggling: 64 cores, 128 threads, half a terabyte of memory and 32 TB of the fastest NVMe-based flash storage you can get. And one quick note about the storage: with this change to flash for our European servers we have now eliminated all hard disk drives across our entire infrastructure, including our North American servers, which have been using flash-based storage since their original deployment.

Why make this move now?

So you've seen the hardware, but you may be asking why we chose to do this now and not earlier. Well, a few things aligned to drive this decision.

Firstly, we've been wanting to move off our old hardware for a while due to the increase in demand for our services. We estimated that to keep up with demand in Europe alone we would need to bring a new server online every 3 to 4 months. One of the things people may not consider when adding servers to a cluster is that the more servers you have, the smaller the impact adding one extra server has.

For instance, moving from 6 servers to 7 only reduces each server's share of the load from 16.66% to 14.28%. The question at that point is: will you really feel that 2.38% redistribution on each of your servers? In our experience, not really. But if you're moving from 2 to 3 servers, or 4 to 5, the difference is much greater.

So in this situation we decided to grow our infrastructure higher instead of wider by reducing the number of nodes in the cluster from 6 to 4 while making each individual node as powerful as our entire previous cluster. When we do eventually add a 5th node it will have a larger impact. We think staying around a maximum of 6 nodes per region is a good standard for us at the moment, and as newer and faster hardware becomes available (32-core CPUs with high frequencies, for instance) we may simply upgrade to those as opposed to adding more servers to the cluster.

Secondly, we've seen the DDoS attacks on our service increase in both severity and frequency. And although we use an anti-DDoS service (CloudFlare), their ability to scrub all of this attack traffic is limited because we're operating an API and not a normal website that they can just cache and re-serve to our legitimate visitors. This really necessitated scaling up to be able to withstand the large attacks we've been receiving.

Thirdly, the price of maintaining our old infrastructure was starting to become uncomfortable relative to its performance. Right now Europe is suffering through an energy crisis, and our older servers offered very low performance per watt compared with newer hardware. Put simply, for every watt of energy they consumed we received around 0.23 to 0.25 units of performance relative to our new hardware, which delivers 1 unit of performance per watt, just over a four-fold increase.

Fourthly, the hardware available in the market caught up to what we wanted. Moving infrastructure like this is a big job. There's a lot to consider, like what hardware to choose, comparing benchmarks, features and upfront cost as well as long-term costs. In addition, just deploying and setting up all of these new servers in a controlled manner with zero downtime takes considerable time and effort.

So when we decided to change servers we didn't want to do it for just a 0.5-1.5x performance gain; that isn't enough of a jump to warrant all that time and effort. But a much larger 4.42x jump? Well, that's substantial enough to make it worthwhile. When choosing the Ryzen 9 5950X we also considered the R5 3600, R7 3700X, R9 3900, EPYC 7502P and even the Intel Core i7 8700, Core i9 9900K and Core i9 12900K.

Ultimately we decided on the Ryzen 9 5950X out of the AMD processors because it's built on their latest Zen 3 architecture, which delivers incredible single-thread performance while still boasting 16 cores and 32 threads. When it came to Intel's offerings, only the 12900K could rival the 5950X in single-threaded performance, but its big.LITTLE-style core structure was unappealing to us and lets it down in multithreaded workloads. It also doesn't support ECC memory.

It took the market quite a while to deliver processors at the level we just discussed. From about 2007 to 2017, quad-core processors reigned supreme in the mainstream market, and most affordable server hosts were only deploying those, which is why we ended up with so many of them in our infrastructure. Since 2017, though, we've seen a steady increase in core counts, with AMD offering 16 cores on their mainstream desktop platform and 128 cores on their dual-socket server platform.

As we mentioned a couple of times above, it's great having many cores, but single-threaded performance still matters, which is why we continue to use these more consumer-orientated microprocessors, which offer much higher frequencies than their server equivalents. It just so happens that with the 5950X we didn't need to sacrifice desirable server features such as a high core count (16 cores), fast I/O (PCIe 4.0), large quantities of RAM (128GB) and ECC (error-correcting) memory support.

The last thing we wanted to discuss is redundancy. As we mentioned before, when moving from 6 to 4 servers for Europe we considered redundancy and decided it was a worthwhile compromise. Part of the rationale is that we're not placing all four servers in one datacenter; they have been placed in three geographically separated datacenters across multiple countries.

Conclusion

So that's our infrastructure deep dive for Europe. This is all part of a wider infrastructure plan, as we're still seeking good hosting opportunities in Asia. We tested a few servers in Asia last year and, while none of them were able to deliver to our high standards, we continue to look and are optimistic we will find something within our budget that has the performance and reliability we require.

Thanks for reading, we hope this was interesting and have a wonderful week.

