New dated release of the v3 API


We're excited to announce a major update to our ongoing v3 API beta, released today. This version introduces some major new features and enhancements, so we've issued a new dated version, selectable from within the Customer Dashboard or by supplying &ver=11-February-2026 with your requests to our v3 API endpoint.
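
For example, pinning a request to this release might look like the following minimal sketch (the key value and IP address are placeholders; only the &ver= flag is the point here):

```typescript
// Minimal sketch: pinning a request to the new dated version.
// YOUR_API_KEY and the IP are placeholders; &ver= selects the release.
const res = await fetch(
  "https://proxycheck.io/v3/1.2.3.4?key=YOUR_API_KEY&ver=11-February-2026"
);
console.log(await res.json());
```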

What's New

Detection History Tracking

Our API now includes detection_history, providing you with a complete timeline of an IP address's listing status. This feature shows:

  • Whether the listing is currently live or has been delisted
  • Precise timestamps in ISO 8601 format indicating when entries were or will be delisted

This historical context helps you understand the lifecycle of positive detections, and we've added a new query flag for viewing historical data.

The &history=1 flag

When you provide this flag, we will show you every positive detection we've recorded for an address, no matter how long ago it occurred. You can also combine it with the &days= flag to restrict the history to the window between now and a point in the past you care about.

For example, if you supply &history=1&days=7 we will show you the full positive detection history of an IP over the past 7 days, even if we've since delisted it from our live results. Without the &history=1 flag, the &days=7 may be disregarded if we've already delisted an IP because we're no longer confident in the determination. Confidence fades with age: listings become stale, and thus inaccurate, unless the data is refreshed, so if we haven't seen an IP acting maliciously in a while it's likely safe again and needs to be delisted.

In the past we allowed the &days= flag to act like a history flag, but due to misuse by customers we restricted how far back it can go to no further than our delist times for the data we hold. The &history=1 flag brings back the ability to look beyond our delist times whilst retaining the metadata you need about an address's delisting time, so you know whether we've delisted the data before you make a final decision.
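
Here's a rough sketch of what that looks like in practice (the response layout and the field names inside detection_history are assumptions for illustration; the API provides live/delisted status and ISO 8601 delist timestamps as described above):

```typescript
// Sketch: full 7-day detection history, including delisted entries.
const res = await fetch(
  "https://proxycheck.io/v3/1.2.3.4?key=YOUR_API_KEY&history=1&days=7"
);
const data = await res.json();

// Illustrative shape only; the section may be nested under the queried IP:
// "detection_history": [
//   { "status": "delisted", "delisted": "2026-02-08T12:00:00Z" }
// ]
console.log(data["1.2.3.4"]?.detection_history);
```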

Attack History & Classification

The "new" attack_history feature delivers comprehensive intelligence about recent malicious activity. I put new in quotes because this is actually a feature from our v2 API that we've brought to the v3 API. It's simpler, more standardised and thus easier to parse programmatically. For each IP address, you'll receive:

  • A list of the attack types the IP has been observed partaking in
  • Numbered counts showing the frequency of each attack type

Gone from the v2 API is the numbered total; you can easily compute it in your own programming language (see the sketch below), so it felt redundant to include in this version. This granular attack data should enable you to assess threat severity and be more selective about the addresses you allow to access your sites and services.
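
Recreating that total yourself is a one-liner. A minimal sketch, assuming attack_history maps attack types to counts as described above (the type names shown are hypothetical):

```typescript
// Hypothetical attack_history shape: attack type -> observed count.
const attackHistory: Record<string, number> = {
  "Login Attempt": 42,
  "Comment Spam": 7,
};

// Recreate the v2-style total by summing the per-type counts.
const total = Object.values(attackHistory).reduce((sum, n) => sum + n, 0);
console.log(total); // 49
```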

Enhanced Operator Intelligence

We've significantly improved the operator information section with two key enhancements:

Specific Service Classification

We now identify the exact services provided by IP address operators as an array, which means you may receive one or more of these services in the new services section of the operator data (sketched after this list):

  • Residential proxies
  • Wireless proxies
  • Datacenter proxies
  • Residential VPNs
  • Datacenter VPNs
  • Web scraping
  • And more
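
As a rough sketch, the operator data might look like this (only the services array is described above; the other fields are assumptions for illustration):

```typescript
// Illustrative only: "name" and "url" are assumed fields; the new
// services array can contain one or more of the classifications above.
interface Operator {
  name: string;
  url: string;
  services: string[]; // e.g. ["Residential proxies", "Residential VPNs"]
}
```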

Cross-Operator Detection and Display

The new additional_operators key reveals other operators we've observed controlling the same IP address. This multi-operator insight helps you:

  • Identify IP addresses with shared infrastructure
  • Detect operator networks and relationships
  • Understand when an address is experiencing broad abuse

We will always put the most common operator we see as the main/primary operator with the others as additional operators. The reason we've added this section at all is because our huge increase in residential proxy detection has revealed a lot of shared infrastructure amongst these organisations.

To put it simply, they're sharing the same pool of compromised computers, and we want to show that in our data so you can more easily identify the most-abused addresses, the ones you should always keep blocked.
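
Sketched out, that might look like this (whether additional_operators sits beside or inside the operator section, and its exact shape, are assumptions here):

```typescript
// Reusing the Operator shape from the earlier sketch.
interface Operator {
  name: string;
  services: string[];
}

// Illustrative: the primary operator is the one we observe most often;
// the rest appear under additional_operators.
interface OperatorResult {
  operator: Operator | null;        // the most commonly observed operator
  additional_operators: Operator[]; // other operators sharing this IP
}
```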

We've also updated our lookup pages to show additional operators in the top operator card section.

Some API output format changes

The biggest change to the API format is that the operator section will now always be present for every IP, but if no operator is available you'll receive operator: null. The same applies to our new detection_history and attack_history sections: if no data exists, they'll be null too.

We've done this to keep the API response consistent and to make developers aware that there may be additional data presented even if the IP they're checking right now doesn't feature it.
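
In practice that just means a null check before use; a minimal sketch (nesting under the queried IP is an assumption, as above):

```typescript
declare const data: Record<string, any>; // parsed response, as in the sketches above

// Sections are always present but may be null; guard before use.
const result = data["1.2.3.4"]; // assumed nesting under the queried address
if (result.operator !== null) {
  console.log(result.operator.services);
}
if (result.detection_history !== null) {
  // ...inspect the listing timeline here
}
```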

Getting Started

This new dated version of the API is available right now. As we mentioned at the top, you can select it from within the Customer Dashboard or by supplying &ver=11-February-2026 with your requests to our v3 API endpoint. Thanks for reading and have a wonderful week!


End of January Platform Updates


Enhanced Proxy Detection and Strengthened Security

We're excited to share two significant updates that improve both the accuracy and security of our platform.

10x Increase in Residential Proxy Detection Coverage

We've dramatically increased our residential proxy address collection; specifically, we're now collecting 10 times the volume of proxy addresses per day compared to a week ago. This substantial increase translates directly into higher proxy detection rates, and we're not done: we intend to further increase our detection rate of residential proxies in the coming weeks as we tune the infrastructure we've built to tackle this problem.

Improved Accuracy Through Reduced Display Time

With this increased data collection rate, we're now encountering the same residential proxy addresses multiple times within a 24-hour window. This repeated observation has enabled us to make an important optimization: we've reduced the average display time of results on our API from 48 hours to 24 hours for residential proxies.

This change delivers a meaningful reduction in false positive rates. Here's why: when we observe a proxy address multiple times in quick succession, we can confidently extend its active status. Conversely, addresses that appear only once or twice and then disappear are more likely to be outliers—IP addresses that have ceased operating as proxy servers. By tightening our display window to 24 hours, we can more effectively filter out these outliers while maintaining coverage of genuinely active proxy infrastructure.

The net result is fresher, more accurate data that better reflects the current state of residential proxy networks, and fewer support tickets from your customers requesting that their addresses be allowed to access your services.

Comprehensive Security Infrastructure Audit & Upgrades

In our ongoing commitment to protecting our platform, we've performed an internal security audit and implemented a suite of security protocols:

DNS and Email Security Enhancements

DNSSEC (Domain Name System Security Extensions): We've deployed DNSSEC to protect against DNS spoofing attacks, ensuring that users connecting to our platform are actually reaching our legitimate servers, not malicious imposters.

CAA (Certification Authority Authorization): By implementing CAA records, we've explicitly specified which certificate authorities are authorized to issue certificates for our domain, reducing the risk of fraudulent certificate issuance.

MTA-STS (Mail Transfer Agent Strict Transport Security): This protocol enforces encrypted SMTP connections for email delivery, preventing man-in-the-middle attacks on our email communications.

Enhanced SPF (Sender Policy Framework) Records: We've refined our SPF configuration to better prevent email spoofing and improve our email deliverability and authenticity.

TLS-RPT (TLS Reporting): We've enabled TLS reporting to monitor the effectiveness of our email security measures and quickly identify any delivery issues or potential security incidents.

Automatic Email Replies: Not strictly security related, but during our audit we noticed we did not automatically acknowledge customer emails to confirm receipt and let you know to expect a reply. We now send these acknowledgements so you can be sure your mail reached us successfully.

What This Means for You

These security enhancements work together to create multiple layers of protection for our platform and your data. From preventing domain hijacking to securing email communications, these protocols represent industry best practices in modern web security.

Looking Forward

These updates reflect our dual commitment to delivering accurate, timely threat intelligence while maintaining the highest security standards. As residential proxy networks continue to evolve, we'll continue investing in both our detection capabilities and our infrastructure security.

If you have questions about these updates or how they impact your use of our platform, please don't hesitate to reach out to our support. Thanks for reading and have a wonderful week!


Asian Infrastructure Expansion


Today we've deployed a new server in Singapore, doubling the capacity and redundancy of our Asian infrastructure. Last month we had an ill-timed fault with our only Asian server that we weren't able to resolve for about 10 days, so we wanted to expand our Asian footprint so that customers don't face higher latency if the same thing occurs again, or while we're performing scheduled maintenance.

You may have seen in the news recently that the price of memory, and to a lesser extent storage, has increased due to demand far exceeding worldwide supply. This added urgency to securing an extra server before pricing became completely unreasonable: this server, while the same specification as our other Singaporean server, will already cost us twice as much every month, and it carried a significant up-front allocation fee when in the past no fee was imposed.

In addition, while Singapore is a great place to host servers, with excellent network connectivity to surrounding countries and a stable government, it has some of the highest per-square-meter land costs in the world, and as a result datacenter space is sold at an extreme premium. We pay around eight times more for servers in Singapore than in India, and three times more than in Europe. So having multiple servers there is a significant monetary investment, but one we know will benefit our growing customer base.

So that's the update for today: the new server, called PLUTO, is live and accepting requests right now. As always, thanks for reading and have a wonderful week.


New Pricing Live


As we mentioned in this blog post on November 1st, we planned to increase our prices in January 2026, and as it's the first day of January, we've done so today. We also wanted to note that we kept large disclaimers visible for two months on both our pricing page and customer dashboard, explaining how to lock in the lower prices ahead of today. This way, everyone was well informed about the upcoming price increase and could plan accordingly.

Every plan we offer has increased in price; however, the percentage increase differs between plans, with the largest proportional increases falling on our Business and Enterprise plans. For our most popular Starter plans we've increased prices by only $1, which works out to between a 20% and 33% increase depending on the Starter plan.

This means our previous lowest plan, which had been $2.99 since January 2020, is now $3.99. We think this still offers incredible value, and we've not changed any plan's quantity of daily queries, custom rules, custom lists or burst tokens. Same features, same service, just at a new price.

For our Pro plans, we've raised the first plan's price from $6.99 to $9.99, and we like the idea of separating each of our three Pro plans by $5 instead of the previous $3.

As mentioned, the largest increases come to our Business and Enterprise plans. The first Business plan, which used to be $19.99, is now $29.99, representing a 50% increase. Similarly, the last Business plan, which was $49.99, is now $99.99, a 100% increase. We've done this because the cost to deliver this plan (2.56 Million daily queries) is substantial, and we want to be able to purchase more powerful servers to better serve the high-capacity needs of our largest customers.

As we move to the Enterprise plans, the first plan has also seen a 100% increase, from $99.99 to $199.99, because we're sticking to a linear cost-per-query structure beginning at the 1.28 Million daily queries mark (Business plan, $49.99 per month). So every 1.28 Million daily queries costs $49.99, which is why the 5.12 Million plan is now $199.99 (essentially 4 × $49.99 plus a few cents) and our largest, 10.24 Million plan is now $399.99, up from $199.99. We do want to point out that we're one of the few SaaS businesses in the industry with transparent Enterprise pricing available right on the webpage. We don't do "call us for pricing" sales tactics that lead to upselling.

Now as always, when we increase our prices, they only apply to newly started or altered plans. So the subscription you already hold will not change in price; you will continue to pay your existing lower price until you cancel your plan. We've also updated the plan section within the Dashboard to clearly show when you're subscribed to a plan at a lower price than currently offered, so it's not confusing.

We know that nobody likes price increases; we've kept our prices the same since 2020, but unfortunately the higher cost of everything, including energy, infrastructure, advertising and more, has meant we needed to raise them. Thankfully, we offer a lot of plans, which means we can offer pricing that fits exactly what you need to save you the most money, and we've been able to protect our lowest-cost Starter plans, keeping them very competitively priced, which maintains the accessibility of our service.

And of course, we still offer our full-featured free plan, with complete access to the exact same API results as our paid plans and 1,000 daily queries. We're fully committed to our free plan and have no plans to stop offering it or degrade it in any way.

Thanks for reading, and have a wonderful new year.


Introducing Our New Batch Lookup Web Interface


Our Brand New Web Interface: Speed Meets Simplicity

We're thrilled to unveil a completely redesigned web interface for proxycheck.io, now available at https://proxycheck.io/web/. This isn't just a visual refresh; it's a ground-up reimagining of how you perform bulk lookups, combining a modern user interface with class-leading performance and full v3 API compatibility.

A Fresh, Intuitive Design

The new interface features a clean, modern design that makes checking IP addresses more intuitive than ever. Here's what the main overview of results looks like:

[Screenshot: the main results overview]

This new pane interface shows more addresses in a denser way while maintaining access to important controls like copying. New with this interface is an expand feature which pops a pane out into a larger modal.

Rich Contextual Information

We've added intelligent tooltips throughout the interface that provide detailed explanations without cluttering the main view. Hover over any element to reveal comprehensive information including the specific detection types, location and network data.

[Screenshot: a contextual tooltip showing detection types, location and network data]

Since our new v3 API supports multiple detection types for a single address we felt it was important to include the colourful detection icons within this context tooltip so that you can identify at a glance all the various detections we've made about a single address.

Detailed Results Breakdown

Below the main overview you'll find an expanded results section that provides granular details about each address you've checked:

[Screenshot: the detailed results breakdown panes]

These detailed breakdown panes change size depending on how varied the results you're receiving are, to maximise the viewable area of the page. They are also color-coded for at-a-glance reading, and of course the colors and icons match those found within the contextual tooltips for visual coherence.

Unprecedented Performance

Beyond the visual improvements, the new interface delivers exceptional performance that sets a new standard for bulk address checking. We've also increased the maximum number of addresses you can check in a single request through the interface from 10,000 to 20,000.

  • Check up to 20,000 addresses in a single request
  • Check 1,000 addresses in just 200 milliseconds
  • Check 10,000 addresses in under 1.7 seconds

Don't just take our word for it, below is a real-world performance example checking 10,000 varied addresses:

[Screenshot: real-world performance example checking 10,000 varied addresses]

This blazing-fast performance means you can validate large lists of addresses in real-time, without the traditional waiting you may have experienced with our previous web interface that utilised our older v2 API.

Try It Now

The new web interface is live and ready to use at https://proxycheck.io/web/. We invite you to experience the speed and simplicity for yourself.

As always, we'd love to hear your feedback. Let us know what you think of the new design and how it's working for your use cases. Your input helps us continue improving the service. Thanks for reading and have a wonderful week!


v3 API November development update


New API Beta: Enhanced Detection Insights

We're excited to announce the release of our new API beta version, dated November 20th, 2025. This update brings new responses that give you deeper insights into detection results and help you make more informed decisions. This is our third dated version of the v3 API as the beta continues.

What's New in the Beta

Confidence Scores

The headline feature of this release is the addition of confidence scores for all detections. Every detection now includes a confidence value ranging from 0 to 100, indicating how certain the API is about its findings. This numerical score gives you immediate insight into the reliability of each detection, allowing you to:

  • Filter results based on confidence thresholds
  • Prioritize high-confidence detections for immediate action
  • Flag low-confidence results for manual review
  • Make more nuanced decisions based on detection certainty

The score is heavily weighted towards recent detections, meaning that as the time since our last detection grows, the confidence score falls. This allows us to present results for a longer time period, because the confidence score helps you decide how to action each result.

One thing to note about this release: because of the new confidence score, some data that previously expired quickly will now linger for much longer. Due to this, we're going to hide those results on the API unless you supply a &days= flag with your requests, so that we can provide a safe default.

As an example, residential proxy results typically expire within 48 hours, but you can keep them visible for much longer by supplying &days=14, which keeps them displayed for 14 days while the confidence score degrades the further you get from our latest detection.
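
As a rough sketch of how you might act on these scores (the thresholds and field layout here are assumptions, not recommendations):

```typescript
// Hypothetical triage based on the 0-100 confidence score.
interface Detection {
  type: string;
  confidence: number; // 0-100, per this beta release
}

function triage(detections: Detection[]) {
  const actNow = detections.filter((d) => d.confidence >= 80); // block immediately
  const review = detections.filter((d) => d.confidence < 40);  // flag for manual review
  return { actNow, review };
}
```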

Temporal Detection Data

We've also added two new fields to the detection section that provide important temporal context:

  • first_seen: When the detection was initially identified
  • last_seen: The most recent occurrence of the detection

These timestamps will help you understand the timeline of detections and track persistence.
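
Since both are timestamps, tracking persistence is a simple subtraction; a small sketch with illustrative values (ISO 8601 strings assumed, matching the rest of this API version):

```typescript
// Sketch: how long a detection has persisted.
const detection = {
  first_seen: "2025-11-01T08:00:00Z", // illustrative values
  last_seen: "2025-11-19T16:30:00Z",
};

const persistedDays =
  (Date.parse(detection.last_seen) - Date.parse(detection.first_seen)) /
  (1000 * 60 * 60 * 24);
console.log(persistedDays.toFixed(1)); // "18.4"
```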

Lookup Page Updates

We've also upgraded our lookup pages to take full advantage of the new API version. The interface now displays:

  • Confidence scores for all detections
  • First seen and last seen timestamps
  • Enhanced appearance of detections, making them easier to parse at a glance

This means you can explore detection data more thoroughly directly through the web interface, without needing to make API calls for basic queries.

Getting Started with the Beta

The new beta API version is now available for testing. We encourage you to explore these new features and share your feedback with us. Your input during this beta phase will help us refine the API before an eventual stable release that we think will come early next year.

You can begin using it by supplying &ver=20-November-2025 with your requests to the /v3/ endpoint, or you can select it from the v3 API dropdown selector within the customer dashboard. If you've set your API version to Latest Stable Version you'll already be using the new release.

We've also updated the API documentation page and test console.

Questions or Feedback?

If you have any questions about the new beta features or encounter any issues during testing, please don't hesitate to reach out to our support through the contact us page and as always, have a great weekend.


18th of November 2025 Outage


Today we experienced the longest contiguous downtime in our service's 9-year history, lasting around 3 hours. The cause was a worldwide outage of the CloudFlare content delivery network, of which we are a customer. They have an incident report you can read here.

EDIT:// CloudFlare have now also posted a blog post going into more detail which you can read here.

First of all, we would like to apologise for this downtime. We truly believe we have done everything we can to mitigate downtime, but eventually there is a single point of failure somewhere, and for us that is CloudFlare. Whether we run our own DNS servers and nameservers, or even own and operate our own IP addresses and autonomous networks, eventually you have to rely on a third party somewhere that has the potential to go down.

The reason we chose CloudFlare as our sole single point of failure is that the vast majority of our own customers use CloudFlare. Based on the metrics we have, around 80% to 95% of the websites that utilise our API are using CloudFlare. This means that if there is a CloudFlare outage, our customers are likely experiencing the same outage on their own websites, which reduces the impact of our downtime.

We're one of the millions of websites that went down today, including OpenAI, Spotify, Uber, Twitter/X and even Downdetector.

There are ways we could utilise multiple content delivery networks; for example, we could use Microsoft Azure CDN or Amazon AWS CloudFront alongside CloudFlare, both of which also experienced hours-long downtime in recent weeks. But using multiple CDNs at once simply moves the single point of failure higher up the chain, to the load balancer that chooses which CDN handles your traffic. If that load balancer had an outage instead of CloudFlare, our outage would no longer coincide with those of our customers who use CloudFlare, and would thus have a larger impact.

We made all of these considerations and researched our options after our last major CloudFlare outage, which lasted 38 minutes in 2019. We thought utilising multiple CDNs would be a simple solution, and we even trialled some, but ultimately we saw that we were just trading one single point of failure for another, and the impact on our customers would be larger if CloudFlare weren't our sole single point of failure.

We're writing this blog post with the detailed explanation above because we want to explain not only why we were down, but also what led to the decision to choose CloudFlare in the first place, and more specifically why we knowingly keep them as our single point of failure: of all the options available, CloudFlare's position in our infrastructure has the lowest impact on our customers.

We're sure that CloudFlare will make a blog post of their own going into specific detail about this outage and what they'll change in the future. We will update our own blog post here with a link to their explanation at that time. EDIT:// That blog post is now available here.

If you would like to get in touch with us for any reason please feel free to use the contact page. Thanks for reading and have a great week.


Upcoming Price Increase


Today, we've added notices on the pricing page and the customer dashboard to make everyone aware that we will increase the prices of our subscription plans in January 2026. We wanted to give everyone a two-month heads-up about this, so if you were planning to make a purchase you have enough time to think it through before the new prices come into effect.

We won't be discussing what the new prices will be in this post, but we do want to make clear that these prices only apply to newly started plans and alterations to existing plans (meaning you upgrade or downgrade the plan you already have active).

So if you're already subscribed, nothing changes for you; the price you've been paying will remain the same. But if you upgrade or downgrade your plan after January 2026, you will face the higher prices, so if you were planning to alter your plan, we recommend doing it before the new pricing comes into effect.

For those who are not subscribed and have instead been paying manually either via PayPal or Crypto, we will continue to honour the pricing you have been paying as long as you're still renewing the plan before it ends, and we may offer some grace period beyond that at our own discretion.

We know that pricing increases suck, and that is partly why we've kept most of our prices the same since 2020. For example, the starter plan pricing beginning at $2.99 USD has remained at that price since January 2020. That plan has offered great value, and it will continue to do so after it increases in price from January 2026.

If you're wondering why we're increasing prices: as you probably know, things have gotten more expensive since 2020. Whether that's advertising, software licenses, hosting or energy, our costs have gone up in all these areas, and while we did bake in a healthy margin at the start, the fact is that, with inflation very high over the past several years, we now need to adjust our pricing to cover our costs.

And since we do not increase prices for current customers holding active subscriptions, we do need to predict the future and adjust pricing ahead of our needs. However, this pricing strategy has a benefit for our customers: loyalty is rewarded through fixed prices. Whether you stay subscribed for a few months or many years, the price you paid at the beginning is the price you pay now. New customers are not getting a better deal than you.

So that's the update for today. Just to reiterate: the pricing changes will come into effect in January 2026 and only apply to subscriptions newly started or altered after that date. Any subscription started today will remain on the current lower pricing while it remains active.

Thanks for reading, and have a wonderful weekend.


New Server Day!


If you checked out our status page yesterday, you may have seen the display below, where we've highlighted three new servers.

[Screenshot: status page highlighting three new servers]

And for the really observant, you may have noticed that NOVA and VEGA have been removed and in their place are NEON and VELA. Before we get into that, let's just detail the new European server, ERIS.

A few years ago, we introduced four extremely high-performance servers to our European cluster. We consider Europe to be our "core", which other regions (North America, Asia, etc.) fall back on, and it also serves our African traffic in addition to Europe's.

So having a lot of resources there is important as the service has grown. Europe remains our highest source of daily queries, and by adding the ERIS server node to that region we're increasing its total capacity by 25%. This also raises per-second request limits in the region. We didn't strictly need to do this, but it does increase our redundancy margin.

Now, let's discuss what happened to NOVA and VEGA. When we added them earlier this year, we used a new host that we hadn't used before in North America, and initially, it was great. The performance was consistent and high, and the reliability was also strong for both the hardware and the network.

But as time has gone on, we've been having more and more network connectivity problems and hardware issues, including data corruption. In fact, we had to make a status message about this on September 22nd, where the dashboard was failing to load for 25% of our users in North America. This was due to the VEGA server having corrupted files caused by either bad memory or failing storage.

After we resolved that, the issue came back a few days ago and this time not only on VEGA but on NOVA too. At this point, we decided to cut our losses, give up the servers and purchase new ones from a provider we've had more than a decade of experience with. The costs are higher, but reliability is something we're not willing to compromise on.

But these are not just replacements; these new servers are upgrades over NOVA and VEGA, with more cores, faster storage and a better network with lower latency and more bandwidth.

One thing we wanted to mention: we do not use hyperscalers like Amazon AWS, Microsoft Azure or Google Cloud. We find their performance lacking and their prices too high, and as seen earlier this week with Amazon AWS's worldwide outage, putting all your trust in a single provider can lead to catastrophe.

We use multiple hosts and many different geographically separated data centers, even for the same service region. We also treat our server hosts as commodity providers, meaning we don't rely on their special features; we build and maintain our own systems, which lets us design in reliability from the beginning. We mentioned above how VEGA suffered data corruption and had to be removed from the cluster while it underwent repairs. The impact on the service was minimal; in fact, the server was kicked from the cluster automatically, which alerted us to the file corruption before it became a bigger problem. Resiliency by design is how we've built our service from day one.

So that's the update for today. We've not always discussed our hardware changes, but we felt that adding three new server nodes in a single day warranted an explanation. We're also planning for the future; we intend to replace most, if not all, of our nodes in 2027, and we're targeting a 50% CPU performance uplift with those upgrades.

Now that is still two years away, so you may see us add some newer servers before then, but our focus is to not go above 5 servers per region if we can help it, so that means replacing older servers with newer ones over time that can handle more traffic.

Thanks for reading, and have a wonderful weekend!


v3 API October development update


It's time for another update on our progress with the v3 API, and we've got our first major output format changes and a new dated release version.

Within the dropdown selector for the v3 API in your dashboard you'll see a new dated version called 10th of October 2025. In this release we altered how absent data is displayed; previously, in the 12th of August 2025 release, we placed the string "Unknown" in the values of keys which lacked data.

We knew this was not optimal, and after receiving feedback from customers we changed it to show null instead, which is the more accepted way to represent absent values, especially in the JSON format that our API utilises.

The second change: the last_updated key near the bottom of the result format used a Unix epoch timestamp. We have now changed this to ISO 8601 UTC format, which is still precise to the second like before but is also human readable.
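
In practice the change looks like this (values are illustrative):

```typescript
// Before (12th of August 2025 release): Unix epoch seconds.
const before = { last_updated: 1760054400 };

// After (10th of October 2025 release): ISO 8601 UTC, still second-precise.
const after = { last_updated: "2025-10-10T00:00:00Z" };

// Both parse to equivalent Date objects:
console.log(new Date(before.last_updated * 1000).toISOString());
console.log(new Date(after.last_updated).toISOString());
```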

Since both of these changes break the data format, we issued a new dated release so those who have been testing the new v3 API in production can transition at their own pace.

One other thing we wanted to discuss: we recently discovered a bug in the client-side CORS example JavaScript code available for both our v2 and v3 APIs. The bug caused the JavaScript code placed on your website to always trigger (and thus detect your website visitor as a proxy) when the API responded with a warning status code. The bug was caused by an operator precedence issue, and we've since updated the CORS code examples.
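
To illustrate the class of bug (a hypothetical reconstruction, not the actual example code):

```typescript
declare const status: string;          // e.g. "ok" | "warning" | "denied"
declare const proxy: string;           // "yes" | "no"
declare function blockVisitor(): void; // hypothetical handler

// Buggy: && binds tighter than ||, so this parses as
// warning || (ok && proxy) — a warning status alone always blocks.
if (status === "warning" || status === "ok" && proxy === "yes") {
  blockVisitor();
}

// Fixed: parenthesise so a warning status alone never blocks.
if ((status === "ok" || status === "warning") && proxy === "yes") {
  blockVisitor();
}
```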

However, since most users will not think to update their code, we've decided to alter all versions of our API so that when you make a CORS request that would have received a warning status, we'll show an ok status instead. This change affects only two kinds of warning messages, and only for CORS requests:

  1. You're within 10% of your query allowance for the day
  2. You've gone over your query allowance and a burst token has been consumed.

Just to reiterate, this change will only occur if you're making a CORS (client-side) request. We will not change the status from warning to ok if you're making a server-side request.

We didn't make this decision lightly; we felt it was important to make sure customer websites didn't become inaccessible, and these two warning messages are likely not being monitored by CORS implementors anyway, as the messages are predominantly not logged and the API response is consumed by the browser of your website's visitor.

So those are the updates for today. Thanks for reading and have a wonderful weekend.

