Asian Infrastructure Expansion


Today we've deployed a new server in Singapore, doubling the capacity and redundancy of our Asian infrastructure. We're doing this because last month we had an ill-timed fault with our only Asian server that we weren't able to resolve for about 10 days. By expanding our Asian infrastructure, our customers won't face higher latency if the same thing occurs again or while we perform scheduled maintenance.

You may have seen in the news recently that the price of memory, and to a lesser extent storage, has increased due to demand far exceeding worldwide supply. This added urgency to deploying an extra server before pricing became completely unreasonable: this new server, while having the same specification as our other Singaporean server, will cost us twice as much every month, and it carried a significant up-front allocation fee where in the past no fee was imposed.

In addition, while Singapore is a great place to host servers, with great network connectivity to surrounding countries and a stable government, it has some of the highest square-meter land costs in the world, and as a result datacenter space is sold at an extreme premium. We are looking at paying eight times more for servers in Singapore compared to India, and three times more compared to Europe. Having multiple servers there is a significant monetary investment, but one we know will benefit our growing customer base.

So that's the update for today: the new server, called PLUTO, is live and accepting requests right now. As always, thanks for reading and have a wonderful week.


New Pricing Live


As we mentioned in this blog post on November 1st, we would be increasing our prices in January 2026, and as it's the first day of January, we've done exactly that today. We also wanted to note that we kept large disclaimers visible for two months on both our pricing page and customer dashboard, explaining how to lock in the lower prices ahead of today. This way, everyone was well informed about the upcoming price increase and could plan accordingly.

Every plan we offer has increased in price; however, the percentage increase differs between plans, with the largest proportional increases falling on our Business and Enterprise plans. For our most popular Starter plans we've increased prices by only $1, which is effectively a 20% to 33% increase depending on the Starter plan.

This means our previous lowest plan, which had been $2.99 since January 2020, is now $3.99. We think this still offers incredible value, and we've not changed any plan's quantity of daily queries, custom rules, custom lists or burst tokens. Same features, same service, just at a new price.

For our Pro plans, we've raised the first plan's price from $6.99 to $9.99, and we like the idea of separating each of our three Pro plans by $5 instead of the previous $3.

As mentioned, the largest increases come to our Business and Enterprise plans. The first Business plan, which used to be $19.99, is now $29.99, which represents a 50% increase. Similarly, the last Business plan, which was $49.99, is now $99.99, representing a 100% increase in price. We've done this because the cost to deliver this plan (2.56 million daily queries) is substantial, and we want to be able to purchase more powerful servers to better serve the high capacity needs of our largest customers.

As we move to the Enterprise plans, the first plan has also seen a 100% increase, from $99.99 to $199.99, because we're sticking to a linear cost-per-query structure beginning at the 1.28 million daily queries mark (Business plan, $49.99 per month). So for every 1.28 million daily queries, it costs $49.99, which is why the 5.12 million plan is now $199.99 (essentially 4 × $49.99, plus a few cents) and our largest 10.24 million plan is now $399.99, up from $199.99. We do want to point out that we're one of the few SaaS businesses in the industry with transparent enterprise pricing available right on the webpage. We don't do "call us for pricing" sales tactics that lead to upselling.
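
To make the math concrete, here's a minimal sketch of that linear structure (our own illustration, not code from our billing system):

```typescript
// Our own illustration of the linear pricing math described above; not code
// from our billing system. One "block" is 1.28M daily queries at $49.99/month.
function planPriceCents(dailyQueries: number): number {
  const blockQueries = 1_280_000; // 1.28 million daily queries per block
  const blockPriceCents = 4999;   // $49.99 per block per month, in cents
  const blocks = Math.ceil(dailyQueries / blockQueries);
  return blocks * blockPriceCents;
}

console.log(planPriceCents(1_280_000) / 100);  // 49.99  (Business)
console.log(planPriceCents(5_120_000) / 100);  // 199.96 -> sold as $199.99
console.log(planPriceCents(10_240_000) / 100); // 399.92 -> sold as $399.99
```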

Now as always, when we increase our prices, they apply only to newly started or altered plans. The subscription you already hold will not change in price; you will continue to pay your existing lower price until you cancel your plan. We've also updated the plan section within the Dashboard to clearly show when you're subscribed to a plan at a lower price than currently offered, so it's not confusing.

We know that nobody likes price increases; we've been able to keep our prices the same since 2020, but unfortunately the higher costs of everything, including energy, infrastructure, advertising and more, have meant we needed to increase our prices. Thankfully, we have a lot of plans, which means we can offer pricing that fits exactly what you need to save you the most money, and we've been able to protect our lowest-cost Starter plans, keeping them very competitively priced, which maintains the accessibility of our service.

And of course, we still offer our full-featured free plan, which has complete access to the exact same API results as our paid plans and includes 1,000 daily queries. We're fully committed to our free plan and we have no plans to stop offering it or degrade it in any way.

Thanks for reading, and have a wonderful new year.


Introducing Our New Batch Lookup Web Interface


Our Brand New Web Interface: Speed Meets Simplicity

We're thrilled to unveil a completely redesigned web interface for proxycheck.io, now available at https://proxycheck.io/web/. This isn't just a visual refresh; it's a ground-up reimagining of how you perform bulk lookups, combining a modern user interface with class-leading performance and full v3 API compatibility.

A Fresh, Intuitive Design

The new interface features a clean, modern design that makes checking IP addresses more intuitive than ever. Here's what the main overview of results looks like:

[Screenshot: the main overview of results]

This new pane interface shows more addresses in a denser way while maintaining access to important controls like copying. New to this interface is an expand feature that pops a pane out into a larger modal.

Rich Contextual Information

We've added intelligent tooltips throughout the interface that provide detailed explanations without cluttering the main view. Hover over any element to reveal comprehensive information including the specific detection types, location and network data.

[Screenshot: a contextual tooltip showing detection details]

Since our new v3 API supports multiple detection types for a single address, we felt it was important to include the colorful detection icons within this contextual tooltip so that you can identify at a glance all the various detections we've made about a single address.

Detailed Results Breakdown

Below the main overview you'll find an expanded results section that provides granular details about each address you've checked:

[Screenshot: the expanded results section]

These detailed breakdown panes change size depending on how varied the results you receive are, maximising the viewable area of the page. They are also color-coded for at-a-glance reading, and of course the colors and icons match those found within the contextual tooltips for visual coherence.

Unprecedented Performance

Beyond the visual improvements, the new interface delivers exceptional performance that sets a new standard for bulk address checking, and we've also increased the maximum number of addresses you can check in a single request through the interface from 10,000 to 20,000.

  • Check up to 20,000 addresses in a single request
  • Check 1,000 addresses in just 200 milliseconds
  • Check 10,000 addresses in under 1.7 seconds

Don't just take our word for it; below is a real-world performance example checking 10,000 varied addresses:

[Screenshot: checking 10,000 varied addresses]

This blazing-fast performance means you can validate large lists of addresses in real-time, without the traditional waiting you may have experienced with our previous web interface that utilised our older v2 API.

Try It Now

The new web interface is live and ready to use at https://proxycheck.io/web/. We invite you to experience the speed and simplicity for yourself.

As always, we'd love to hear your feedback. Let us know what you think of the new design and how it's working for your use cases. Your input helps us continue improving the service. Thanks for reading and have a wonderful week!


v3 API November development update


New API Beta: Enhanced Detection Insights

We're excited to announce the release of our new API beta version, dated November 20th, 2025. This update brings new responses that give you deeper insights into detection results and help you make more informed decisions. This is our third dated version of the v3 API as the beta continues.

What's New in the Beta

Confidence Scores

The headline feature of this release is the addition of confidence scores for all detections. Every detection now includes a confidence value ranging from 0 to 100, indicating how certain the API is about its findings. This numerical score gives you immediate insight into the reliability of each detection, allowing you to:

  • Filter results based on confidence thresholds
  • Prioritize high-confidence detections for immediate action
  • Flag low-confidence results for manual review
  • Make more nuanced decisions based on detection certainty

The score is heavily weighted towards recent detections, meaning that as time passes since our last detection, the confidence score reduces. This allows us to present results over a longer time period, because the confidence score helps you decide how to action each result.

One thing to note about this release: because of the new confidence score, some data that previously expired quickly will now linger for much longer. Due to this, we're going to hide such results on the API if you don't supply a &days= flag with your requests, so that we provide a safe default.

As an example, residential proxy results typically expire within 48 hours, but you can keep them visible for much longer by supplying &days=14, which keeps them displayed for 14 days while the confidence score degrades the further you get from our latest detection.
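
Here's a rough sketch of how you might combine the &days= flag with the new confidence score. The response shape in this sketch is an assumption for illustration; only the 0 to 100 confidence field and the &days= flag are described in this post:

```typescript
// Sketch: query the v3 API with &days=14 and act on the confidence score.
// The detections array shape is assumed for illustration; only the 0-100
// confidence field and the &days= flag are described in this post.
const key = "YOUR_API_KEY"; // placeholder
const ip = "203.0.113.7";   // example address

const res = await fetch(`https://proxycheck.io/v3/${ip}?key=${key}&days=14`);
const data = await res.json();

for (const detection of data[ip]?.detections ?? []) {
  if (detection.confidence >= 80) {
    console.log("act on:", detection.type, detection.confidence); // high confidence
  } else {
    console.log("review:", detection.type, detection.confidence); // flag for manual review
  }
}
```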

Temporal Detection Data

We've also added two new fields to the detection section that provide important temporal context:

  • first_seen: When the detection was initially identified
  • last_seen: The most recent occurrence of the detection

These timestamps will help you understand the timeline of detections and track persistence.
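
For illustration, a detection entry could be typed along these lines; everything except confidence, first_seen and last_seen is an assumed placeholder:

```typescript
// Assumed shape for illustration; only confidence, first_seen and last_seen
// are fields confirmed by this beta release.
interface Detection {
  type: string;       // e.g. "proxy" or "vpn" (placeholder values)
  confidence: number; // 0-100, weighted towards recent detections
  first_seen: string; // when the detection was initially identified
  last_seen: string;  // the most recent occurrence of the detection
}
```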

Lookup Page Updates

We've also upgraded our lookup pages to take full advantage of the new API version. The interface now displays:

  • Confidence scores for all detections
  • First seen and last seen timestamps
  • An enhanced appearance for detections, making them easier to parse at a glance

This means you can explore detection data more thoroughly directly through the web interface, without needing to make API calls for basic queries.

Getting Started with the Beta

The new beta API version is now available for testing. We encourage you to explore these new features and share your feedback with us. Your input during this beta phase will help us refine the API before an eventual stable release that we think will come early next year.

You can begin using it by supplying &ver=20-November-2025 with your requests to the /v3/ endpoint, or you can select it from the v3 API dropdown selector within the customer dashboard. If you've set your API version to Latest Stable Version you'll already be using the new release.
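
For example, pinning a server-side request to the new dated version might look like the sketch below (the key and address are placeholders):

```typescript
// Pin a v3 request to the 20-November-2025 beta via the &ver= flag.
// The key and address are placeholders.
const res = await fetch(
  "https://proxycheck.io/v3/203.0.113.7?key=YOUR_API_KEY&ver=20-November-2025"
);
console.log(await res.json());
```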

We've also updated the API documentation page and test console.

Questions or Feedback?

If you have any questions about the new beta features or encounter any issues during testing, please don't hesitate to reach out to our support through the contact us page. As always, have a great weekend.


18th of November 2025 Outage


Today we experienced the longest contiguous downtime in our service's 9-year history, lasting around 3 hours. The cause was a worldwide outage of the CloudFlare content delivery network, of which we are a customer. They have an incident report you can read here.

EDIT:// CloudFlare have now also posted a blog post going into more detail which you can read here.

First of all, we would like to apologise for this downtime. We truly believe we have done everything we can to mitigate downtime, but eventually there is a single point of failure somewhere, and for us that is CloudFlare. Whether we run our own DNS servers and nameservers or even own and operate our own IP addresses and autonomous networks, eventually you have to rely on a 3rd party somewhere that has the potential to go down.

The reason we chose CloudFlare as our sole single point of failure is that the vast majority of our own customers use CloudFlare. Based on the metrics we have, around 80% to 95% of the websites that utilise our API are using CloudFlare. This means that if there is a CloudFlare outage, our customers are likely experiencing the same outage on their own websites, which reduces the impact of our downtime.

We're one of the millions of websites that went down today, including OpenAI, Spotify, Uber, Twitter/X and even Downdetector.

There are ways in which we could utilise multiple content delivery networks; for example, we could use Microsoft Azure CDN or Amazon AWS CloudFront alongside CloudFlare, both of which also experienced hours-long downtimes in recent weeks. But this approach of using multiple CDNs at once simply moves the single point of failure higher up the chain, to the load balancer that chooses which CDN handles your traffic. If that were to have an outage instead of CloudFlare, our outage would not coincide with the outages of our customers who use CloudFlare, and would thus have a larger impact.

We made all of these considerations and researched our options the last time we had a major CloudFlare outage, which lasted 38 minutes in 2019. We thought utilising multiple CDNs would be a simple solution, and we did trial some solutions, but ultimately we saw that we were just trading one single point of failure for another, and the impact on our customers would be larger if we didn't make CloudFlare alone our single point of failure.

The reason we're writing this blog post with the above detailed explanation is that we want to explain not only why we were down, but also what led to the decisions that resulted in us choosing CloudFlare in the first place, and more specifically why we keep them as our single point of failure even though we know they hold that position within our infrastructure: of all the options available, CloudFlare has the lowest impact on our customers specifically.

We're sure that CloudFlare will make a blog post of their own going into specific detail about this outage and what they'll change in the future. We will update our own blog post here with a link to their explanation at that time. EDIT:// That blog post is now available here.

If you would like to get in touch with us for any reason please feel free to use the contact page. Thanks for reading and have a great week.


Upcoming Price Increase


Today, we've added notices on the pricing page and the customer dashboard to make everyone aware that we will increase the prices of our subscription plans in January 2026. We wanted to give everyone a two-month heads-up about this, so if you were planning to make a purchase you have enough time to think it through before the new prices come into effect.

We won't be discussing what the new prices will be in this post, but we do want to make clear that these prices only apply to newly started plans and alterations to existing plans (meaning you upgrade or downgrade the plan you already have active).

So if you're already subscribed, nothing changes for you; the price you've been paying will remain the same. But if you upgrade or downgrade your plan after January 2026, you will face the higher prices, so if you were planning to alter your plan, we recommend doing it before the new pricing comes into effect.

For those who are not subscribed and have instead been paying manually either via PayPal or Crypto, we will continue to honour the pricing you have been paying as long as you're still renewing the plan before it ends, and we may offer some grace period beyond that at our own discretion.

We know that pricing increases suck, and that is partly why we've kept most of our prices the same since 2020. For example, the starter plan pricing beginning at $2.99 USD has remained at that price since January 2020. That plan has offered great value, and it will continue to do so after it increases in price from January 2026.

If you're wondering why we're increasing prices: as you probably know, things have gotten more expensive since 2020. Whether that's advertising, software licenses, hosting or energy, our costs have gone up in all these areas, and while we did bake in a healthy margin at the start, the fact is that with inflation being very high over the past several years, we now need to adjust our pricing to cover our costs.

And since we do not increase prices for current customers holding active subscriptions, we need to predict the future and adjust pricing ahead of our needs. However, this pricing strategy has a benefit for our customers: loyalty is rewarded through fixed prices. Whether you stay subscribed for a few months or many years, the price you paid at the beginning is the same price you pay now. New customers are not getting a better deal than you.

So that's the update for today. Just to reiterate: the pricing changes come into effect in January 2026 and apply only to subscriptions newly started or altered after that date. Any subscription started today will remain on the current lower pricing while it remains active.

Thanks for reading, and have a wonderful weekend.


New Server Day!


If you checked out our status page yesterday, you may have seen the display below, where we've highlighted three new servers.

[Screenshot: the status page highlighting the three new servers]

And for the really observant, you may have noticed that NOVA and VEGA have been removed and in their place are NEON and VELA. Before we get into that, let's just detail the new European server, ERIS.

A few years ago, we introduced four extremely high-performance servers to our European cluster, and we consider Europe to be our "core", which other regions (North America, Asia, etc.) fall back on. That cluster also serves our African traffic in addition to Europe's.

So, having a lot of resources there is important as the service has grown. Europe remains our highest source of daily queries, and by adding the ERIS server node to that region we're increasing total capacity by 25%. This also raises per-second request limits in the region. We didn't strictly need to do this, but it increases our redundancy margin.

Now, let's discuss what happened to NOVA and VEGA. When we added them earlier this year, we used a new host that we hadn't used before in North America, and initially, it was great. The performance was consistent and high, and the reliability was also strong for both the hardware and the network.

But as time has gone on, we've been having more and more network connectivity problems and hardware issues, including data corruption. In fact, we had to make a status message about this on September 22nd, where the dashboard was failing to load for 25% of our users in North America. This was due to the VEGA server having corrupted files caused by either bad memory or failing storage.

After we resolved that, the issue came back a few days ago and this time not only on VEGA but on NOVA too. At this point, we decided to cut our losses, give up the servers and purchase new ones from a provider we've had more than a decade of experience with. The costs are higher, but reliability is something we're not willing to compromise on.

But these are not just replacements, these new servers are upgrades over NOVA and VEGA, with more cores, faster storage and a better network with lower latency and more bandwidth.

One thing we wanted to mention: we do not use hyperscalers like Amazon AWS, Microsoft Azure or Google Cloud. We find their performance lacking and their prices too high, and, as seen earlier this week with Amazon AWS having a worldwide outage, putting all your trust in a single provider can lead to catastrophe.

We use multiple hosts and many different geographically separated data centers, even for the same service region. We also treat our server hosts like commodity providers, which means we don't rely on their special features; we build and maintain our own systems, which lets us design in reliability from the beginning. We mentioned above how VEGA suffered data corruption and had to be removed from the cluster while it underwent repairs. The impact to the service was minimal, and in fact VEGA was kicked from the cluster automatically, which alerted us to the file corruption problem before it became an issue. Resiliency by design is how we've built our service from day one.

So that's the update for today. We've not always discussed our hardware changes, but we felt that with us adding three new server nodes in a single day, it warranted an explanation. We're also planning for the future; we intend to replace most, if not all, of our nodes in 2027, and we're targeting a 50% CPU performance uplift with those upgrades.

Now, that is still two years away, so you may see us add some newer servers before then, but our aim is to not go above 5 servers per region if we can help it, which means replacing older servers over time with newer ones that can handle more traffic.

Thanks for reading, and have a wonderful weekend!


v3 API October development update


It's time for another update on our progress with the v3 API, and we've got our first major output format changes and a new dated release version.

Within the dropdown selector for the v3 API in your dashboard you'll see a new dated version called 10th of October 2025. In this release we altered how absent data is displayed; previously, in the 12th of August 2025 release, we placed a string called "Unknown" in the values of keys which lacked data.

We know that this was not optimal, and after receiving feedback from customers we changed it to show null instead, which is the more accepted way to represent absent values for keys, especially in the JSON format that our API utilises.

The second change concerns the last_updated key near the bottom of the result format, which used a Unix epoch. We have now changed this to an ISO 8601 UTC format, which is still precise to the second like before but is also human readable.
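
As an illustrative before-and-after of these two changes (the surrounding keys are placeholders, not the full v3 response format):

```typescript
// Illustrative only; the keys shown are placeholders, not the full v3 format.

// 12th of August 2025 release:
const before = {
  city: "Unknown",          // absent data used the string "Unknown"
  last_updated: 1760054400, // Unix epoch, seconds
};

// 10th of October 2025 release:
const after = {
  city: null,                           // absent data is now null
  last_updated: "2025-10-10T00:00:00Z", // ISO 8601 UTC, precise to the second
};

// Converting an epoch to the new style is a one-liner:
console.log(new Date(1760054400 * 1000).toISOString()); // 2025-10-10T00:00:00.000Z
```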

Since both of these are breaking changes to the data format, we issued a new release so those who have been testing the new v3 API in production can transition at their own pace.

One other thing we wanted to discuss: we recently discovered a bug in the client-side CORS example JavaScript code available for both our v2 and v3 APIs. The bug caused the JavaScript code placed on your website to always trigger (and thus detect your website visitor as a proxy) when the API responded with a warning status code. It was caused by an operator precedence issue, and we've since updated the CORS code examples.
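
We won't reproduce the exact example code here, but as a hypothetical sketch of this class of bug: in JavaScript, && binds more tightly than ||, so an ungrouped condition can fire on any warning status:

```typescript
// Hypothetical sketch, not the actual example code. Because && binds more
// tightly than ||, the buggy condition parses as:
//   status === "warning" || (proxy === "yes" && risk >= 66)
// and fires for ANY warning response, regardless of the proxy result.
function shouldBlockBuggy(status: string, proxy: string, risk: number): boolean {
  return status === "warning" || proxy === "yes" && risk >= 66;
}

// One corrected grouping, shown for contrast:
function shouldBlockFixed(status: string, proxy: string, risk: number): boolean {
  return (status === "ok" || status === "warning") && proxy === "yes" && risk >= 66;
}

console.log(shouldBlockBuggy("warning", "no", 0)); // true: visitor wrongly blocked
console.log(shouldBlockFixed("warning", "no", 0)); // false
```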

However, since most users will not check for updated code, we've made the decision to alter all versions of our API so that when you make a CORS request that would have received a warning message, we show an ok message instead. This change affects only two kinds of warning messages, and only for CORS requests; those are:

  1. You're within 10% of your query allowance for the day
  2. You've gone over your query allowance and a burst token has been consumed.

Just to reiterate, this change will only occur if you're making a CORS (client-side) request. We will not change the status from warning to ok if you're making a server-side request.

We didn't make this decision lightly; we felt it was important to make sure customer websites didn't become inaccessible, and these two warning messages are likely not being monitored by CORS implementers anyway, as the messages are predominantly not logged and the API response is consumed by the browser of the visitor to your website.

So those are the updates for today. Thanks for reading and have a wonderful weekend.


v3 API September development update


Since our last update we've been hard at work optimising the v3 API, improving performance by lowering the average latency. We've also been fixing bugs and improving the risk score and detection systems. In short, it's more accurate and performant than when we last spoke.

The adoption rate of the v3 API has been much higher than our projections; we expected a very slow ramp-up considering there are no third-party libraries for it yet and we specifically said it's an open beta and there may be bugs. Even so, we've seen heavy deployment by CORS users (client-side JavaScript implementations of our API). In hindsight this should have been expected, since we had already updated our JavaScript example code to use the v3 API.

In addition to that we've also seen some of the game-server plugins and WordPress plugins update to the v3 API which has brought thousands of end-users to the v3 API very quickly. We're very thankful to all the developers who took time out of their schedules to do this and we are taking the stability of v3 very seriously as a result.

To help with the transition to the new API we've updated our API documentation again: we added a new section called API Responses which goes into greater detail about the new response format and what you can expect will and won't change between queries.

In our last blog post we detailed how we had updated our lookup pages to use the new v3 API and vastly revamped their interface. We've been migrating more of our own services to v3, and that has been instrumental in uncovering edge cases in the API that we weren't happy with and improving them, especially around the risk score and staggered detection types, but also around assigning values to keys when data is missing or lacking in fidelity.

The last thing we wanted to mention is the PHP library we maintain for the service. As mentioned in our launch post, we were going to update the library to support the v3 API, and two weeks ago we did just that. In the past day, though, we updated the library again to correct some issues and add HMAC (Hash-based Message Authentication Code) support for improved security. For those unaware, this is a feature that lets you verify API responses haven't been tampered with by comparing SHA-256 hashes of the payload using a pre-shared key from your customer dashboard.
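
To illustrate the idea, here's a minimal sketch of that verification (our own sketch rather than the PHP library's API; where the signature is delivered and how it's encoded are assumptions):

```typescript
// Node.js sketch of verifying a response payload. The delivery mechanism for
// the signature (e.g. a response header) and its hex encoding are assumptions;
// see the library and dashboard docs for the actual details.
import { createHmac, timingSafeEqual } from "node:crypto";

function verifyPayload(rawBody: string, signatureHex: string, presharedKey: string): boolean {
  // Recompute the HMAC-SHA256 of the raw payload with the pre-shared key.
  const expected = createHmac("sha256", presharedKey).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard it first.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```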

You can find the updated library on GitHub here, and we've also created an upgrade guide here, as this new library is incompatible with the previous one and so requires you to update some of your own code. The major change is that we've moved from numeric values to booleans for the options array and response array. There are also a lot more options available, so you can be more granular in what you detect.

We're still on schedule for a stable release of the v3 API, and we're getting a lot done, including a milestone with regards to performance. In our testing we're now consistently seeing 3ms answer times from the v3 API. This is partly due to its new data caching architecture: as more users transition to v3, we're seeing higher cache hit rates for data, which improves latency. We're also seeing a large improvement for batched queries; comparing v2 to v3, we're 60% faster at delivering an answer for a request containing 1,000 addresses, which is an incredible speed improvement.

So that's the update for this month, we'll do another when there's a lot of new things to discuss. Thanks for reading and have a wonderful rest of your week!


New Lookup Pages


Today we've launched a redesign of our lookup pages. These new pages present more information than before, in a denser way, with a better layout, new iconography, data accessibility improvements and a focus on the address information that matters most to our customers.

A few years ago we launched operator data on the API, and with it we added operator cards to the lookup pages. We really liked the design of these cards and wanted to expand the entire look of the page to match them. That is where today leads us: now all the colors and boxes are uniform with the operator you're looking at. Below is a screenshot showing the newly updated operator cards in two brand colors, green and red.

[Screenshot: operator cards in green and red brand colors]

In addition to these, we now have generic summaries for when we don't have an operator card available; they look like the image below.

[Screenshot: a generic summary card]

All the label colors you're seeing in these screenshots are based on either the operator's brand colors or, if one isn't available, the type of detection we've made. So you may see red, orange or blue as generic color choices based on detection types; thus not every result will appear in red.

Below the summaries we now have a new detection section, shown below, with large icons representing the various detection types. Since this page uses our new v3 API, we can display multiple detections simultaneously for the first time. It also means the operators of an IP are much more consistently shown at the top of the page, something our v2 API struggled with because it could only display a single detection type.

[Screenshot: the new detection section]

One thing we wanted to emphasize on the page is what our customers come to it for: checking whether an IP is anonymous or not. So we removed the large encompassing map from the top of the page and instead placed a smaller square map in the middle-right location section of the page, as shown below. We've also added a new network section to the left; we separated these two panels visually to make the page easier to browse.

[Screenshot: the network and location sections]

One other change you may notice is that we've switched to Google Maps for the embedded map. We feel this is the map of choice for most of our customers, which is why we switched. Clicking on the Coordinates link or the map itself will take you straight to Google Maps for the location shown.

And finally, we've redesigned our attack log display, as shown below. This is a feature we're going to refocus on later, as we're considering how to bring this data to the v3 API; for now it's an on-page exclusive feature.

[Screenshot: the redesigned attack log display]

One last thing we wanted to discuss is accessibility. We know that a lot of customers use the lookup page to gather information that they then copy and paste into documents, emails and other off-site tools. We wanted to make this easier, so we've added hidden copy-to-clipboard buttons next to every label; they reveal themselves when you hover your mouse cursor over any piece of information shown on the page, making it very quick to copy the data you need.
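
The pattern behind this is simple; here's a minimal sketch (our illustration, not the page's actual code) of a hover-revealed copy button using the Clipboard API:

```typescript
// Minimal sketch of a hover-revealed copy button (our illustration, not the
// page's actual code). CSS hides .copy-btn and reveals it on row hover, e.g.:
//   .row .copy-btn { visibility: hidden; }
//   .row:hover .copy-btn { visibility: visible; }
function addCopyButton(labelElement: HTMLElement, value: string): void {
  const button = document.createElement("button");
  button.className = "copy-btn";
  button.textContent = "Copy";
  button.addEventListener("click", () => {
    // The Clipboard API requires a secure context (HTTPS).
    navigator.clipboard.writeText(value);
  });
  labelElement.insertAdjacentElement("afterend", button);
}
```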

In addition to that, we spent a lot of time making sure the page looks nice and works well on small-screen devices like tablets and phones, without sacrificing information density for desktop and laptop users. The previous lookup page was very difficult to use on mobile due to the side-by-side table layout; we've now done away with that in favour of a mobile-friendly vertical grid system.

So that's the update for today. If you would like to see a live example of the new pages, you can click right here. Thanks for reading and have a wonderful week!

