Introducing Custom Lists

Today we've launched a brand new feature we're calling Custom Lists, which allows you to create customised lists of addresses, address ranges, autonomous system numbers (ASNs) and email domains for use in blacklists, whitelists or Custom Rules.

This new feature replaces the previous Whitelist and Blacklist tabs within the Dashboard with a unified Custom Lists tab. As you can see above, it looks similar to our current Custom Rules tab, but once expanded these lists offer an entirely different set of controls, as shown below.

In the above screenshot we're showing the default Manual Editing Mode, which lets you add information to a list just as you could with the previous Whitelist and Blacklist features. You can also manipulate a list's contents via the Dashboard API. The previous API still works, but we've also made a new one which offers control of any kind of list, not just whitelists and blacklists.

A brand new feature, however, is the ability to have your lists automatically downloaded from your own website on a regular schedule. When selecting the Automated Mode, the list changes in appearance and allows you to specify a URL and the frequency at which you would like your list downloaded. Below is a screenshot of that interface.
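To illustrate the sort of file the automated download could consume, here's a rough sketch of a parser for a self-hosted list. The file format shown (one entry per line, with `#` comments) is our own illustration rather than an official specification, so check the API documentation for the exact format the downloader expects.

```python
import ipaddress
import re

# Illustrative patterns for the non-IP entry types a Custom List can hold.
ASN_RE = re.compile(r"^AS\d+$", re.IGNORECASE)
DOMAIN_RE = re.compile(r"^@?[a-z0-9.-]+\.[a-z]{2,}$", re.IGNORECASE)

def parse_list(text: str) -> list[str]:
    """Return the valid entries from a hosted list file: single IPs,
    CIDR ranges, AS numbers and email domains. Blank lines and
    '#' comments are ignored."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip trailing comments
        if not line:
            continue
        try:
            ipaddress.ip_network(line, strict=False)  # IP or CIDR range
            entries.append(line)
            continue
        except ValueError:
            pass
        if ASN_RE.match(line) or DOMAIN_RE.match(line):
            entries.append(line)
    return entries

sample = """
# blocklist hosted at https://example.com/blocklist.txt
203.0.113.0/24   # an address range
198.51.100.7     # a single address
AS64500          # an autonomous system
@mailinator.com  # an email domain
"""
print(parse_list(sample))
```

Running this prints the four recognised entries with the comments and blank lines discarded.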

In addition to offering the whitelist and blacklist functionality as before, you can now also create named lists for any eventuality, which can be leveraged solely by your Custom Rules, allowing for additional levels of customisability.

We've also added toggle buttons to each list, allowing you to easily turn them off and on without needing to erase their contents. And as we've followed our Custom Rules design guidelines, you can now move the lists around and easily export them for local backup.

To integrate this feature with Custom Rules we came up with a clever solution: when entering custom values into a rule condition, you'll now have your Custom Lists suggested by name. Click one to add it into the box, and that list will then be consulted by the rule when its conditions call for it.

The last things to discuss are how much this costs, how many lists you can create and what the 4MB size limit per list means.

Well, to answer the first question: this feature does not cost anything extra. You can create as many lists as you like, but you may only enable a certain number of lists dependent on your plan tier. Free users can enable 3 lists, while paid plans can enable between 5 and 60 lists, with custom plans able to go beyond that 60-list limit.

As for the 4MB size limit, we've found in testing that it's possible to add 300,000 addresses to a list without comments and still fit under 4MB. With detailed comments this falls to between 100,000 and 150,000 entries. But the 4MB limit is per list, so if you fill one up you can simply create another.
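A quick back-of-envelope check of those figures, assuming an average bare IPv4 entry of roughly 14 bytes and a commented entry of roughly 32 bytes (our guesses, not measured values):

```python
LIMIT_BYTES = 4 * 1024 * 1024  # the 4MB per-list limit

# Assumed average line sizes: a bare IPv4 address plus newline is about
# 14 bytes, while a line with a detailed trailing comment runs ~32 bytes.
plain_capacity = LIMIT_BYTES // 14
commented_capacity = LIMIT_BYTES // 32

print(f"~{plain_capacity:,} plain entries")      # ~299,593
print(f"~{commented_capacity:,} commented entries")  # ~131,072
```

Both results land neatly inside the ranges quoted above: just under 300,000 plain entries, and within the 100,000-150,000 band once comments are included.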

We hope you all really like this new feature. It has required a lot of extensive work in both the Dashboard and our API; as such, we have issued a new API version dated the 25th of May 2022.

If you want to use the new lists feature you will have to upgrade to this API version, as the previous lists have been copied over to the new format. Until you upgrade, your API calls will still use your old lists, but you won't be able to alter them.

Thanks for reading and have a wonderful week.


Manual Invoice Payments

Today we've introduced an improvement to the invoicing system within the customer dashboard which allows you to pay an outstanding invoice using any bank card you may have through a secure portal at our payment processor, Stripe.

This is a change that we should have introduced earlier but it has only been over the last couple of months that we've seen a large increase in the need for our customers to pay their invoices manually. This has been mainly due to recent banking regulation changes in certain countries like India which have disallowed foreign entities from billing Indian bank accounts in an automated way.

As a result, when you visit the paid options tab of the customer dashboard you'll see a toggle for invoice options and history, which will reveal the below control panel.

This may look familiar if you've viewed it previously. The new part is the Pay Invoice Manually column, which in our screenshot above shows the various states you can expect to see: the most recent payment can be made manually, previously paid invoices are shown as such, and voided payments (for instance, ones that were due but which we manually voided) are also shown.

In addition to these changes we've also updated our email that gets sent when a payment fails to explain how to pay your invoice manually so everyone who needs this feature should be fully aware of it when the time arises.

Thanks for reading and have a great week!


General Update

Today we wanted to take some time to update you on last month's new server deployments and the war in Ukraine: what it means in the context of our service and how it will affect some of our customers.

Server Update

Firstly, some of you have pointed out that we reused some of the names from our old servers for our new ones. The HELIOS, AURA and ZEUS node names were reused for our new nodes, and in fact when it comes to HELIOS we physically swapped out the server at its location, meaning the new server inhabits the exact same rack as the old server.

The names EOS, RHEA and THEA were retired and we introduced ORION as a new name for one of the new servers. This name was actually chosen by one of the plugin authors we work closely with.

So how have they been performing? In short, brilliantly. We've had the highest level of performance available since the service started. The time it takes for pages of our website to load is significantly down, especially when users access the Dashboard for the first time and much of their data hasn't been loaded into memory yet.

Similarly, the API itself has been delivering answers with the lowest latency we've ever seen and, more importantly, delivering it consistently. Being able to serve 99.99% of requests in under 9ms (before network overhead) is a remarkable achievement. This has been made possible by the marriage of our new components: the CPU, our high-speed memory and the flash-based storage all working together to provide this consistently low access latency.

Over the past month since deployment we have come under multiple DDoS attacks, in the same way we had prior to the new server deployments, and we're happy to say the new infrastructure handled these with ease. We were able to absorb the extra traffic generated by the attacks without incident; we didn't even see the average API latency increase.

We couldn't be happier with the new infrastructure.

Ukraine

Like most of the world we're horrified by the unjustified war in Ukraine perpetrated by the Russian Federation under Putin. We condemn this needless war and hope that the Ukrainian people will ultimately prevail.

As you may know from our server status page we have two servers in Eastern Europe. However they are not in Ukraine and are not in any danger at this time. If this changes we will deploy more servers in Western or Central Europe.

When it comes to our Russian customers we want to be clear: we do not blame you for the actions undertaken by your government. However, we are beholden to the same sanctions as others, which means we can no longer process your bank cards or accept PayPal payments from you. We've also made the decision not to accept other forms of payment that may bypass the sanctions, such as cryptocurrency.

We've already heard from a few of you about this and, as we've written back, we won't be accepting other forms of payment; instead, once your plans end they will transition to our free tier. We won't be blocking Russian citizens from using the service, but your government, its agencies and anyone we're aware of on international sanction lists will be denied service.

We know this is an unusual blog post; we've never had to discuss an ongoing war before. We have customers in Ukraine, Russia and many other European countries who are afraid right now. Our service should be the least of their problems, but we are receiving support emails from customers affected by the war and its ramifications (such as the sanctions), so we felt it was important to address a few things here in public.

Thanks for reading and please keep yourselves and each other safe.


Introducing Disposable Email Detection

Since proxycheck.io started we've aimed to provide ever increasing useful data about IP addresses. At first we could tell you if an IP was a proxy server and later if it was a VPN. Then we added location, network and specific operator data.

All of this has been to empower service providers to restrict their content to real people in real places so they can protect their communities and livelihoods from the negative effects of anonymising services.

Now, for the first time, we're launching an entirely new type of check: email. Specifically, we're going to be detecting disposable email services to let you know whether your user or customer will be contactable long term.

This is a big problem for service operators who need to keep in contact with their users and even for websites like our own where we offer a generous free plan that results in a huge amount of account creation abuse, almost all of which is enabled through the use of disposable email addresses.

And so we feel there is a good synergy here: emails are just another kind of address. We do, however, want to make a distinction. We're not going to be detecting privacy-respecting email services as disposable when the addresses they generate are always attached to the same individual.

This means services like iCloud, which offer users a unique email address for every service they sign up for, would not be considered disposable and would not show as disposable on our API.

However services that make a unique mailbox available only for a short time period (minutes to days) will be shown on our API as disposable.

We've been working on this feature since last year and we feel the timing is now right to launch. As of this post, disposable email detection is live and available through the latest version of our v2 API, dated Dec 21. You can check IP addresses and email addresses in the exact same way, by placing them in a GET or POST request. Each email you check uses one query, just like IP addresses do.

And you can even check both an IP address and an email address in the same request, just like how you can check two or more IP addresses today.

One important thing to discuss here is privacy. Sending your users' email addresses to us is a very sensitive thing, and what we'll be doing with them after you send them to us is a very important question.

Firstly, you don't need to send us a full email address; just the @domain.tld portion is needed. For instance, you can substitute any placeholder mailbox name before the @ instead of sending the user's real address at that domain.
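As a sketch of what domain-only checking could look like client-side (the mailbox-stripping helper and the `anything` placeholder are our own illustration; consult the API documentation for the exact request format and response fields):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

API_BASE = "https://proxycheck.io/v2/"

def mask_email(address: str) -> str:
    """Replace the mailbox name with a placeholder, keeping only the
    domain part. 'anything' is our own convention for illustration."""
    return "anything@" + address.rsplit("@", 1)[-1]

def check_disposable(address: str, api_key: str) -> dict:
    # One query is consumed per checked address, the same as an IP check.
    # tag=0 asks the API not to log the query, as described in the post.
    url = f"{API_BASE}{quote(mask_email(address))}?key={api_key}&tag=0"
    with urlopen(url) as resp:
        return json.load(resp)

print(mask_email("jane.doe@tempmail.example"))  # anything@tempmail.example
```

The point of `mask_email` is that the service never sees the real mailbox name, only the domain it needs to classify.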

Secondly, any email address we receive that isn't considered a disposable address is instantly discarded once your request to the API is fulfilled; these addresses are not saved anywhere.

And that is because we do not want to save them; it's unnecessary for this feature to function. However, any positively detected addresses, meaning ones we considered disposable, will be saved just once and only in your positive detection log, which is viewable only by you within your account dashboard. We will not be processing them or storing them anywhere else but your account log. And if you don't want this to happen, you can disable all logging by providing the &tag=0 flag with all your requests to us.

So how does this work with your account and daily allowance? Well, every account, whether free or paid, has the same per-day query allowance, which you can now use to check email addresses in addition to IP addresses. It's just that simple: one query is one query, no matter which kind of check you're performing.

And you can now whitelist or blacklist email addresses just as you can with IP addresses, ranges and AS numbers. As of today the custom rule feature isn't enabled for use on email addresses, but that will likely change in the future as we expand our email feature.

We've already shown this feature to a few developers of third-party proxycheck.io-compatible plugins, and also to customers who we work closely with, and we've received great feedback from both communities. In fact, we already have many feature requests, including generalised email validation. While we cannot commit to such features today, perhaps that is an avenue our email checking will go down in the future.

So that is the new email feature. You'll find the API documentation has been updated including our test console also found on the API page. Feel free to send in some feedback if you have some ideas or a feature request.

Thanks for reading and have a wonderful week.


European Cluster Refresh

Today we'd like to take a deep dive into our European infrastructure, as we feel it'll make for an interesting blog post. If you're not looking for a long and technical analysis it's safe to skip this post, as we're not announcing any new or changed features below; we're just detailing hardware today.

Why are we doing a deep dive?

Firstly, the reason we're doing this is that we have moved our entire European cluster to brand new hardware. This is something we've been attempting to do gradually since 2020 by purchasing more powerful hardware and slowly phasing out older machines.

But we reached a point where this strategy wasn't giving us enough of a capacity jump. When you grow a cluster without infinite money you usually choose to grow either high or wide: add only a few servers but have them be very high-performing, or add many servers and have them be low-to-medium performing.

Naturally these decisions result in compromises. You need to factor in redundancy against hardware failure (having many nodes), performance (faster CPUs, more memory, faster storage etc.) and cost ($$$).

We felt that we could compromise on the number of nodes to vastly increase the performance of each node. Prior to today we had 6 live nodes and 1 hot-spare node for Europe; we decided to reduce this to 4 live nodes and 1 hot-spare and use the money saved from consolidating to drastically raise performance. The new upgrade we're about to reveal is still several times the cost of our previous infrastructure, but the performance gains are much higher than the cost increase.

Put simply, our cost per request falls dramatically when comparing the total request capacity of our new servers to our old servers. So without speaking more abstractly, let's get to the technical details and list what our servers were before and what we're operating now.

Our previous European infrastructure

For Europe, our prior servers mostly consisted of Haswell-era Core i7 quad-core and XEON E3 quad-core processor based systems. Only our newest node (THEA) operated with an 8-core EPYC processor. Most of these servers were equipped with 32 GB of memory and exclusively used hard disk drives, except for THEA, which used NVMe-based flash storage.

We've created the graphic below to illustrate our prior hardware.

As you can see, the majority of our live infrastructure was quad-core and using hard disk drives. You may be wondering why we choose to own our hardware at all, as opposed to using Amazon Web Services, Google Cloud or Microsoft Azure, and there are a few good reasons.

Why not Cloud hosting?

Firstly, those cloud services cost a lot of money relative to the market. And while you can scale quickly to support lots of customers, you can often be blindsided by sudden increases in costs, whether from database transactions, egress fees, compute or storage use. We estimated the cost of using these common cloud providers to be several times higher than operating our own equipment.

Secondly, those services do have outages, and in multiple instances we've seen worldwide outages of both AWS and Azure. This means if we were to use these cloud providers we would need to use more than one simultaneously, which complicates our software development and compounds the cost problem, as we would need multiple nodes running simultaneously at each cloud provider for redundancy.

Thirdly, performance. It may sound counterintuitive that these mega cloud providers don't offer the best performance when you can scale your application to hundreds or even thousands of servers. But when you're dealing with billions of requests, each with a request payload under 1,000 bytes, the TTFB (Time to First Byte) matters. This problem is mainly due to their servers using either XEON or EPYC server-grade microprocessors, which feature low single-thread performance by design to allow for very high core counts in an acceptable power envelope.

One of the main reasons we previously chose to use Core i7 and E3 XEONs is that they all have very high clock speeds, and thus high single-threaded performance, when compared to lower-clocked processors of the same architecture. It's not uncommon to receive a 2GHz E5, E7, Silver or Gold server-grade XEON CPU when using one of these cloud providers, whereas the consumer Core i7 and workstation E3 XEON processors are regularly in the 3.5 to 4.1GHz range. This high single-thread performance is important for maintaining each individual request's low latency, as multiple CPU threads do not work together on a single API request in our architecture.

Fourth, security. One thing all these cloud providers have in common is that the instances they provide are virtualised. And as we've seen over the past several years with Spectre and Meltdown, the types of vulnerabilities being found make being on the same server as other individuals risky. There is always the possibility that the virtual machine host becomes compromised and the memory of another virtual machine guest can be read.

These types of exploits aren't just theoretical anymore; real attacks like this are occurring every day on unpatched systems, and as new vulnerabilities are discovered they can be exploited before mitigations become available. And while newer processors such as AMD's EPYC line now offer fully encrypted virtual machine memory by default, with in-CPU hardware-based cryptographic stores, there is still always the possibility of vulnerabilities that undermine these added layers of security.

This has been a very strong reason for us to use dedicated hardware whenever we can: if we're the only user on the system, it fully eliminates the possibility of this issue affecting our infrastructure.

Our new European infrastructure

So what exactly is the new hardware we've chosen for our European cluster? First let's show a graphic and then we'll go into more detail.

Because the new servers have so many cores we've had to make them a little smaller in the above illustration, but rest assured each core here is 1x to 2x higher performing than the cores in our previous machines, and as you can see there are 16 of them per server as opposed to 4 or 8 in our previous cluster.

Based on the CPU benchmarks we've performed, these new servers raise performance to 4.42 times what we had before. Yes, you read that correctly: we would need to duplicate our old infrastructure just under 4.5 times to be the equivalent in CPU performance to our new infrastructure. That would be 24 of our old servers to match 4 of these new ones.

And that is because we're using the AMD Ryzen 9 5950X 16-core / 32-thread Zen 3 based microprocessor in all of our new servers. This is the fastest processor AMD sells when it comes to single-threaded performance, and the fastest they sell up to 16 cores in multithreaded performance. It has a base clock of 3.4GHz and a boost clock of 4.9GHz, and in our testing these CPUs stay at a steady 4.7GHz.

In CPU PassMark, our previous infrastructure with all servers combined under a multithreaded test scored 41,772 points. Our new infrastructure by comparison scores 184,652 points.
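Those two scores line up with the 4.42x figure quoted earlier:

```python
old_score = 41_772    # combined multithreaded PassMark, previous cluster
new_score = 184_652   # combined multithreaded PassMark, new cluster

ratio = new_score / old_score
print(f"{ratio:.2f}x")  # 4.42x
```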

And this processor doesn't just bring the heat when it comes to performance: it also supports up to 128 GB of the fastest ECC memory. Which just so happens to be exactly what we've equipped it with, as we're using 3200 MHz ECC 32 GB modules from Samsung, which are the fastest JEDEC-compliant ECC modules available.

As if that wasn't enough, this processor also supports PCIe 4.0, which means we were able to equip each server with two 3.84TB PCIe 4.0 NVMe enterprise drives from Samsung, with a rated sequential transfer speed of 7,000MB/s and over a million IOPS. And yes, we do have two of these in every server.

One of the things we've tried to do with our previous infrastructure is not allow our hard drives to hold us back. In that endeavour we developed a tiered caching system for database reads and writes, which allowed the API to run consistently fast even while serving a huge volume of unique data for every request. And while we will continue to use this system on our new servers, we have now raised the base level of disk performance by 7,000% when it comes to sequential access and 34,000% for IOPS.
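We haven't published the internals of that caching system, but the general shape of a tiered read path can be sketched like this (a simplified illustration of the technique, not our actual implementation):

```python
from collections import OrderedDict

class ReadThroughCache:
    """Simplified two-tier read path: a small in-memory LRU tier sits
    in front of a slower backing store (a dict stands in for disk)."""

    def __init__(self, backing, capacity=4):
        self.backing = backing       # tier 2: slow but complete
        self.capacity = capacity
        self.hot = OrderedDict()     # tier 1: fast, bounded in size

    def get(self, key):
        if key in self.hot:                  # memory hit
            self.hot.move_to_end(key)        # mark as recently used
            return self.hot[key]
        value = self.backing[key]            # fall through to slow tier
        self.hot[key] = value                # promote into memory
        if len(self.hot) > self.capacity:
            self.hot.popitem(last=False)     # evict least recently used
        return value

disk = {f"ip:{i}": {"proxy": i % 2 == 0} for i in range(10)}
cache = ReadThroughCache(disk)
print(cache.get("ip:3"))  # {'proxy': False}
```

With a scheme like this, repeat lookups for hot addresses never touch the slow tier at all, which is why the base speed of the disks matters most for cold, unique data.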

Put simply, this performance gain will have a dramatic smoothing effect on data access, which will result in a more consistent experience for our customers accessing the API.

The numbers when you combine all our new hardware together are mind-boggling: 64 cores, 128 threads, half a terabyte of memory and 32TB of the fastest NVMe-based flash storage you can get. One quick note about the storage: with this change to flash-based storage for our European servers, we have now eliminated all hard disk drives across our entire infrastructure, including our North American servers, which have been using flash-based storage since their original deployment.

Why make this move now?

So you've seen the hardware, but you may be asking why we chose to do this now and not earlier. Well, a few things aligned to drive this decision.

Firstly, we've been wanting to move off our old hardware for a while due to the increase in demand for our services. We estimated that to keep up with demand in Europe alone we would need to bring online a new server every 3 to 4 months. One thing people may not consider when adding servers to a cluster is that the more servers you have, the smaller the impact of adding one extra server.

For instance, moving from 6 servers to 7 only redistributes the load on each from 16.66% to 14.28%. The question at that point is, will you really feel that 2.38% redistribution on each of your servers? In our experience, not really. But if you're moving from 2 to 3 servers, or 4 to 5, the difference is much greater.
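The arithmetic behind those percentages, assuming the load balances evenly across nodes:

```python
def share(nodes: int) -> float:
    """Percentage of total load each node carries with even balancing."""
    return 100 / nodes

# Adding a 7th node to a 6-node cluster barely moves the needle...
print(f"{share(6):.2f}% -> {share(7):.2f}% ({share(6) - share(7):.2f}% less per node)")
# ...while adding a 3rd node to a 2-node cluster shifts far more load.
print(f"{share(2):.2f}% -> {share(3):.2f}% ({share(2) - share(3):.2f}% less per node)")
```

The per-node relief shrinks as 100/n - 100/(n+1), so each additional server helps less than the one before it.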

So in this situation we decided to grow our infrastructure higher instead of wider, by reducing the number of nodes in the cluster from 6 to 4 while making each individual node as powerful as our entire previous cluster. And when we do eventually add a 5th node, it will have a larger impact. We think staying around 6 nodes per region maximum is a good standard for us at the moment, and as newer and faster hardware becomes available (32-core CPUs with high frequencies, for instance) we may simply upgrade to those as opposed to adding more servers to the cluster.

Secondly, we've seen the DDoS attacks on our service increase in both severity and frequency. And although we use an anti-DDoS service (Cloudflare), their ability to scrub all of this attack traffic is limited, because we're operating an API and not a normal website that they can simply cache and re-serve to our legitimate visitors. This really necessitated us scaling up to be able to withstand the large attacks we've been receiving.

Thirdly, the price of maintaining our old infrastructure was starting to become uncomfortable relative to its performance. Right now Europe is suffering through an energy crisis, and our older servers offered a very low performance-per-watt metric when compared with newer hardware. Put simply, for every watt of energy they consumed we received around 0.23 to 0.25 units of performance relative to our new hardware, which delivers 1 unit of performance per watt: just over a four-fold increase.

Fourth, the hardware available in the market caught up to what we wanted. Moving infrastructure like this is a big job. There's a lot to consider, like what hardware to choose, comparing benchmarks, features and upfront cost as well as long-term costs. In addition, just deploying and setting up all of these new servers in a controlled manner with zero downtime takes considerable time and effort.

So when we decided to change servers we didn't want to do it for just a 0.5-1.5x performance gain; that isn't enough of a jump to warrant all that time and effort. But a much larger 4.42x jump? Well, that's substantial enough to make it worthwhile. When choosing the Ryzen 9 5950X we also considered the R5 3600, R7 3700X, R9 3900, EPYC 7502P and even the Intel Core i7 8700, Core i9 9900K and Core i9 12900K.

Ultimately we decided on the Ryzen 9 5950X out of the AMD processors because it's built on their latest Zen 3 architecture, which delivers incredible single-thread performance while still boasting 16 cores and 32 threads. When it came to Intel's offerings, only the 12900K could rival the 5950X in single-threaded performance, but its microarchitecture with a big.LITTLE core structure was unappealing to us and lets it down in multithreaded workloads. It also doesn't support ECC memory.

It took the market quite a while to deliver processors at the level we just discussed. From about 2007 to 2017, quad-core processors reigned supreme in the mainstream of the market, and most affordable server hosts were only deploying those, which is why we ended up with so many of them in our infrastructure. Since 2017, though, we've seen a steady increase in core counts, with AMD offering 16 cores on their mainstream desktop platform and 128 cores on their dual-socket server platform.

As we mentioned a couple of times above, it's great having many cores, but single-threaded performance still matters, which is why we continue to use these more consumer-orientated microprocessors, which offer much higher frequencies than their server equivalents. It just so happens that with the 5950X we didn't need to sacrifice desirable features common to servers, such as high core counts (16 cores), fast I/O (PCIe 4.0), large quantities of RAM (128GB) and ECC (error-correcting) memory support.

The last thing we wanted to discuss is redundancy. As we mentioned before, when moving from 6 to 4 servers for Europe we considered the redundancy implications and decided it was a worthwhile compromise. Part of that rationale is that we're not placing all four servers in one datacenter; they have been placed in three geographically separated datacenters across multiple countries.

Conclusion

So that's our infrastructure deep dive for Europe. This is all part of a wider infrastructure plan, as we still seek to find good hosting opportunities in Asia. We tested a few servers in Asia last year, and while none of them were able to deliver to our high standards, we continue to look and remain optimistic that we will find something within our budget that has the performance and reliability we require.

Thanks for reading, we hope this was interesting and have a wonderful week.


Subscription Plan Price Increases

Today we have increased the prices of two Business plans and all our Enterprise plans. Before we get into the new pricing, it's important to make clear that these prices only apply to newly started plans and plan alterations. This means they do not apply to you if you're already subscribed to an affected plan; you will continue to pay the previous lower price until you cancel, upgrade or downgrade your subscription.

So let's first show the old and new prices and then we'll explain why we've done this.

As you can see our Enterprise plans have doubled in price and we've also increased the pricing on our two highest Business plans by $5 and $10 per month. There are a few reasons we've made these changes which we'll now go over.

Firstly, we're still much more affordable than our competition, even after the pricing changes. In fact, our highest-priced Enterprise plan is still lower than our competitors' lowest Business plans while offering hundreds of thousands more queries per day and millions more queries per month. To put it simply, we were priced too low compared to the market.

Secondly, by charging our largest customers more, when they are the ones most likely to be in a position to afford it, we can upgrade our hardware and expand our infrastructure to allow for more customers in the future. These large customers also create the biggest load on the service, which leads into our next point.

Thirdly, we wanted to lock our per-query price point. Once you reach 2.56 million daily queries, it doesn't make sense to receive a discount for using even more resources. At that point you need what you need, and a slightly lower price per query isn't a viable upsell proposition. And if you are making such a large purchase of daily queries, we've found you will most likely use most of your allowance, which means more incurred cost for us.

So you may notice the $49.99, $99.99, $149.99 and $199.99 price points all correspond to the same cost per query. Meaning you don't receive a lowered cost per query for switching between these plans; you simply trade a linear amount, e.g. 2x more money for 2x more queries. This makes these larger plans more sustainable and helps us pay for infrastructure commensurate with the burden these plans put on our service.
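To see the linearity, here's a quick check. Only the 2.56 million daily query figure appears above; the other allowances are our own assumption of strictly linear scaling, used purely for illustration:

```python
# Daily query allowances per price point. Only the 2.56M figure is stated
# in the post; the rest are assumed to scale strictly linearly.
plans = {
    "$49.99": 2_560_000,
    "$99.99": 5_120_000,
    "$149.99": 7_680_000,
    "$199.99": 10_240_000,
}

for price_label, queries in plans.items():
    price = float(price_label.lstrip("$"))
    rate = price / queries * 1_000_000
    print(f"{price_label}: ${rate:.2f} per million daily queries")
```

Under that assumption, every tier works out to the same rate per million daily queries, which is the "one query is one query" pricing described above.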

Like we mentioned at the start, these prices only come into play for people starting a new plan or changing plans. We've always done it this way: the price you pay at the moment you subscribe will always be the same when it comes time to renew. If you're on a pre-paid plan, for instance paying by PayPal, we will of course honour the pricing of your most recent payment when it comes time to renew.

We're also prepared to change your plan to one of the ones listed above for you manually until February 25th 2022. This means that if you're currently subscribed to a plan and contact our support before then, you can upgrade to one of the plans that have changed price while paying the previous lower price rather than the new one.

In addition to the price changes, we've also increased the number of custom rules the Enterprise plans can have enabled at one time, to be more in line with our plan-to-plan increases in custom rule allowances; these are also displayed in the above screenshot.

We know price increases are always disappointing, but we think we've reached a fair balance, where those who use the most are covering the lion's share of their costs to us, and the increases don't apply to anyone currently subscribed to the service, whose patronage has helped build our service into what it is today.

Thank you for reading and we hope you have a wonderful week.


New Custom Rule Feature: Continued Execution

Since introducing the Custom Rules feature we've made many improvements, but there has been one aspect that hasn't changed: when the conditions of a rule are met it runs, and then no other rules positioned below it are allowed to run, even if their conditions would also be met.

We did this originally for one main reason: it made rules simpler for customers to understand. Specifically, they knew that when a rule ran the other rules would not run, making it easier to visualise the cause and effect of their created rules.

As time has moved on, though, this reasoning makes less and less sense. Firstly, Custom Rules has grown substantially in its available feature set: we first introduced optional condition groups, then the rule library, and most recently managed rules. In addition, we've vastly expanded the available data providers, comparison types and output modifiers.

All of those changes were made to increase the utility of rules, and as a result complexity has naturally increased too. But we think that by enabling multiple rules to run one after another we can reduce some of that complexity, because we know some customers are having to build extremely detailed and complicated rules due to only a single rule being able to run per query.

Image description

And so that is why, when a rule is expanded, you'll now see the new toggle shown above. When enabled, it allows your rules to continue being processed even after that rule is triggered.

By default this toggle is off for all newly created rules and previously saved rules, as we think most of the time you'll only want one rule to run; but now, when you do need more than one rule to run, you can easily change that behaviour. The toggle is available on all types of rules, including managed ones, where you decide whether rule processing continues or not.
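The behaviour described above can be sketched as a simple evaluation loop. This is a minimal illustration rather than our actual implementation; the `Rule` type and its fields are assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """Hypothetical rule shape: a name, a condition over the query,
    an action, and the new continue-execution toggle."""
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]
    continue_execution: bool = False  # off by default, as in the dashboard

def process_rules(rules: list[Rule], query: dict) -> list[str]:
    """Evaluate rules top to bottom. Stop after the first triggered
    rule unless that rule has continue_execution enabled."""
    triggered = []
    for rule in rules:
        if rule.condition(query):
            rule.action(query)
            triggered.append(rule.name)
            if not rule.continue_execution:
                break  # classic behaviour: first match wins
    return triggered
```

With the toggle off on the first matching rule, later rules never run even if their conditions also match; turning it on lets processing fall through to the next matching rule.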

So that's the update for today; it sure has been a week full of changes, hasn't it! We still have more to come at the start of next month, so make sure to check back for that!


Introducing Easy Plan Alterations

Image description

Today we've introduced a new feature to the customer dashboard that we know is going to be very popular: the ability to alter your paid plan at any time, both upgrading and downgrading, with prorated pricing.

This means that as your needs grow you can easily increase your plan size yourself, paying only the difference between your current plan and your new plan; and if your needs shrink, you can downgrade your plan and receive a monetary credit which will be applied against future invoices.
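Proration like this comes down to simple arithmetic. The sketch below is a hedged illustration, not our billing code: the plan prices are the example prices from earlier posts, and the 30-day cycle is an assumption.

```python
def prorated_charge(old_price: float, new_price: float,
                    days_left: int, days_in_cycle: int = 30) -> float:
    """Amount due (positive) or credited (negative) when switching
    plans mid-cycle: the unused fraction of the old plan offsets the
    same fraction of the new plan's price."""
    unused_fraction = days_left / days_in_cycle
    return round((new_price - old_price) * unused_fraction, 2)

# Upgrading halfway through the cycle: pay half the price difference.
print(prorated_charge(49.99, 99.99, days_left=15))   # 25.0
# Downgrading halfway through: a negative value, i.e. credit
# toward future invoices.
print(prorated_charge(99.99, 49.99, days_left=15))   # -25.0
```

The key property is symmetry: an upgrade charges exactly the unused-time difference, and a downgrade credits it back rather than forfeiting it.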

Prior to today you had to contact our support to have plan alterations performed for you, and we know this was suboptimal: not only did it add friction for customers wanting to alter their plans, it also increased our support burden.

We did in fact start implementing this feature during the height of the COVID pandemic, when we saw a very high number of plan increase requests as more people working and spending time at home drove higher query usage by the websites and services utilising our API. Although other features took precedence, we're very happy to finally bring this feature to fruition.

Image description

And we certainly think it was well worth the wait to implement it properly: as can be seen in the screenshot above, it's very easy to change plans. We know that many companies make it easy to start a plan but don't always make it easy to downgrade or cancel. The dreaded "contact our support" to facilitate a downgrade or cancellation often leads to an annoying sales pitch.

We fully reject this way of doing things, which is why we've added not just the ability to upgrade your plan but also to downgrade, with credit applied to your account balance and used automatically for future invoices.

In addition to upgrading and downgrading plans, you can also switch from monthly to yearly billing or vice versa. So if you've been a long-time customer paying monthly, you can now easily switch to paying yearly for our 8.33% discount.
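The 8.33% figure corresponds to getting roughly one month free per year, since 1/12 ≈ 8.33%. A quick check, under the assumption that a yearly plan costs eleven times the monthly price:

```python
monthly = 49.99                      # example monthly price from above
yearly = monthly * 11                # assumption: yearly = 11 months' cost
full_year_at_monthly = monthly * 12  # what 12 monthly payments would cost

discount = 1 - yearly / full_year_at_monthly
print(f"{discount:.2%}")             # 8.33%
```

Any monthly price gives the same percentage, because the ratio 11/12 is independent of the price itself.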

One last thing to mention: if you do happen to have a credit balance, it will now be shown along the top information bar of your dashboard so you can keep track of any funds to be used for future payments.

Thanks for reading and we hope everyone is having a great week!


New European Server Node Introduced

Image description

Today we've introduced a new node to our cluster called THEA, which increases our European capacity by 17% in request terms and by about 30% in processing terms.

It runs the same hardware configuration as our LETO node, introduced in November last year for North America, which means one of the latest EPYC processors from AMD featuring a very high thread count and high Instructions Per Clock (IPC).

We've done this for three main reasons.

Firstly, request load has been steadily increasing over the past few months, which meant we needed to expand our footprint to support new customers. We did that for North America late last year with LETO, and now in Europe with THEA.

Secondly, we had been planning to migrate our server nodes to higher-specification servers anyway. This new baseline includes the latest generation of high core count processors from AMD and PCIe-based NVMe flash storage. Both LETO and THEA meet these new criteria.

Thirdly, as you may have read about yesterday, the attacks against us are increasing in frequency and severity. This reality means it's important we have extra capacity beyond what is merely required to run our service.

And while we rely heavily on our CDN partner CloudFlare to scrub attack traffic for us, having our own infrastructure able to withstand some of the attack traffic still plays an important role. They can't always react fast enough or immediately decipher which of our traffic is legitimate and which is malicious, because our service is an API accessed by headless servers for the vast majority of requests.

So that is our announcement for today. We will be announcing a new feature early next month so make sure to check back for that. Until then, thanks for reading and have a wonderful week.


Major service disruption

Image description

Today between 12:25 PM and 1:15 PM GMT we suffered a major outage. At its peak just over half of all traffic sent to our servers did not receive any kind of response.

This was due to a very large attack on our infrastructure that didn't trigger our anti-DDoS protection, because it originated from a very large number of source addresses and generated traffic similar to that of our legitimate customers. In addition, one of our server nodes was offline before the attack began due to an unrelated fault, which removed 25% of our North American cluster capacity.

The outage ended when we mitigated the attack manually by engaging certain controls at our CDN partner, which immediately brought service back into normal operation.

With attacks against the service becoming more frequent, we will be spending even more time on our mitigation strategy. Today we were slow to react because our automatic system didn't engage, and when trying to deal with the attack manually we found it difficult to pinpoint which addresses were launching it amongst our normal traffic.

Although we saw that our traffic was several times greater than normal, we couldn't quickly identify which addresses were part of the attack and which were legitimate customer traffic, because the attack purposefully mimicked legitimate requests to our service.

If we have anything more to share about this attack we will update this post. Until then, we are very sorry this occurred and we will strive to do better.
