Welcome to 2025 and our first big update of the year!

Image description

Happy New Year to everyone as we start 2025 with a big update. We won't be doing a round-up of everything from last year because you can scroll down and read about it all in our blog posts below, and we certainly recommend you take a look at those!

What we want to talk about today is location data and our new location engine, which was deployed today on the latest version of the API (November 2024). Over the past several years we've used a ping-based triangulation system to figure out where IP addresses are physically located.

It's a simple model: you have lots of servers all around the world, you have them all ping the same address, and based on the latency you can triangulate with reasonable confidence where that address is physically located. This model worked well for us for a long time, but both we and our customers have noticed some location drift, especially over the past year.
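
To make the idea more concrete, here's a rough sketch of how latency can be turned into a distance bound and those bounds intersected to narrow down a plausible location. This is purely illustrative, with toy numbers rather than our production code:

```python
# Simplified sketch of latency-based geolocation (multilateration).
# Probe locations and RTTs below are illustrative, not real measurements.
import math

# Light travels roughly 200 km per millisecond in fibre, and a ping's RTT
# covers the distance twice, so each millisecond of RTT bounds the target
# to roughly 100 km from the probe.
KM_PER_MS_OF_RTT = 100

probes = [
    # (latitude, longitude, measured RTT in milliseconds to the target)
    (51.5, -0.1, 8.0),    # London
    (48.9, 2.4, 14.0),    # Paris
    (40.7, -74.0, 75.0),  # New York
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def plausible(lat, lon):
    """A candidate location is plausible if it lies within every probe's latency bound."""
    return all(
        distance_km(lat, lon, p_lat, p_lon) <= rtt * KM_PER_MS_OF_RTT
        for p_lat, p_lon, rtt in probes
    )

for city, lat, lon in [("Amsterdam", 52.4, 4.9), ("Chicago", 41.9, -87.6)]:
    print(city, plausible(lat, lon))  # Amsterdam True, Chicago False
```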

To address these issues, we investigated the causes:

Firstly, we relied too much on one specific VPS (Virtual Private Server) provider for most of our test servers. This created problems where their fibre links to certain locations artificially lowered the latency for addresses tested through them. In effect, they sometimes gave us an extremely fast highway to drive on, which skewed the latency compared to the wider internet and, when mixed with results from our other VPS providers, produced wrong results that we couldn't account for.

Secondly, we simply didn't have enough servers. To do triangulation properly, and to increase accuracy when you get down to the city or postcode level, you need more servers, with several in most cities.

Thirdly, anycast addresses, where a single IP address is announced worldwide from multiple locations, caused our limited number of servers to disagree about where an address is located, forcing our software to make a determination, sometimes the wrong one, from conflicting test results.

And finally, pinging doesn't always work and isn't always the best approach. Sometimes you need to simply read third-party announcements where an ISP specifically states where an address is. Sometimes you need to perform a traceroute and follow the path towards an address so you're checking the physical location of all the intermediary routers. In short, metadata is important.

So to solve these issues we've vastly expanded our network of VPSs. We're now using more VPS providers so we're not overly influenced by the robustness of any single provider, and we're spinning VPSs up and down as needed to increase our network size while keeping costs in check. We've also switched to performing traceroutes, gathering information from every address along the route to the address we're interested in, and we're making use of ISP-provided metadata.
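
To give a flavour of why the intermediary hops matter, router hostnames along a path often embed location hints such as airport or city codes. The sketch below uses made-up hop names and a toy lookup table purely to illustrate the general idea; it is not our actual implementation:

```python
# Simplified sketch: pulling location hints out of traceroute hop hostnames.
# The hop names and the code-to-city table below are made up for illustration.
import re

AIRPORT_CODES = {"lhr": "London", "ams": "Amsterdam", "fra": "Frankfurt", "jfk": "New York"}

def location_hints(hop_hostnames):
    """Return any recognised city codes embedded in router hostnames."""
    hints = []
    for name in hop_hostnames:
        for token in re.split(r"[.\-]", name.lower()):
            # Router names frequently include codes like 'lhr01' or 'fra2'.
            match = re.match(r"([a-z]{3})\d*$", token)
            if match and match.group(1) in AIRPORT_CODES:
                hints.append(AIRPORT_CODES[match.group(1)])
    return hints

# Hypothetical hops seen on the way to a target address.
hops = ["ae-1.core1.lhr01.example.net", "be-2.edge3.ams1.example.net"]
print(location_hints(hops))  # ['London', 'Amsterdam']
```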

The result of all this work is that country detection specifically (which is what most of our customers care about when it comes to location information) is once again extremely accurate. When compared with market leaders whose main or only product is location data, we're very competitive for both IPv4 and IPv6.

Another improvement provided by the new location engine is fewer blank spots where we had no location data for an address at all. These were a side effect of our prior ping approach when addresses simply did not reply; the traceroute system combined with ISP metadata takes care of this and provides accurate location information for these previously unknown addresses.

So that's the update for today. Remember this is available only on our latest API version dated November 2024. If you've already set your API version to this in our dashboard or you have it set to the "Latest Version" then you already have the new location engine. If you would like to compare past and current results you can alter your API version to our previous release.

Thank you for reading, and welcome once again to 2025!


Introducing Device Estimates

Image description

Today we're introducing a new feature called device estimates, which presents you with the estimated device count for specific addresses and their subnets, based on actual data derived from customer usage of our API. It's available when you supply the &asn=1 flag with your requests.

By using these new data fields within the API (shown below) you'll be able to decide whether to allow an IP to interact with your service based on how many devices are estimated to be active behind it, allowing you to make a better risk assessment.

Image description
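
As a rough sketch of how you might consume this data, the example below queries the API with &asn=1 and reads an estimated device count. The field names shown are assumptions for illustration; consult the API documentation for the authoritative response structure:

```python
# Hedged sketch: querying the API with &asn=1 and reading the device estimates.
# The "devices" field layout used here is an assumption; consult the API
# documentation for the authoritative response structure.
import json
import urllib.request

API_KEY = "your_api_key"   # placeholder
ip = "203.0.113.7"         # example address from the documentation range

url = f"https://proxycheck.io/v2/{ip}?key={API_KEY}&asn=1"
with urllib.request.urlopen(url) as response:
    data = json.load(response)

devices = data.get(ip, {}).get("devices", {})   # assumed field name
address_count = devices.get("address")
subnet_count = devices.get("subnet")

# Example policy: flag unusually busy addresses for a closer look.
if address_count is not None and address_count > 100:
    print(f"{ip}: ~{address_count} devices behind this address, review before allowing")
```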

One thing we were very keen to maintain with this feature is user privacy. This is why we do not detail the exact devices being used behind an IP address, and in fact we don't perform any kind of device fingerprinting: all of this data is gathered anonymously, and our estimated number is based on number theory rather than tracking specific devices. This means we can still maintain accurate device estimates without impinging on user privacy.

The new device estimate feature is available now in the API; we've issued a new version dated the 19th of November 2024. This data is also exposed within the Custom Rule feature, which means you can now build rules against device counts for both individual addresses and their subnets. We've also added device estimates to our individual threat pages.

In addition to presenting this data in the API, we're also using it internally to help us discover previously unknown VPN services and proxy servers; we'll do another blog post on the results of this in the future.

That's the update for today. We hope you'll take advantage of the new feature, and thanks for reading!


Refreshing our status page

Image description

Today we've launched a new version of our status page designed to convey more relevant data to you and to make the status page itself more resilient and accessible in emergencies.

First of all, the page has a brand-new address. Previously our status page was at proxycheck.io/status, which meant it could become inaccessible if our entire website were down. It has now been changed to status.proxycheck.io, which as a sub-domain can be operated independently of our normal service cluster.

Image description

The second big change is that we now show status history. The image above illustrates the new pill-style history graph, showing the status of our API over the past 3 days in one-hour increments. Each pill can show multiple colors at once, with the size of each color indicating the service status and how long that status lasted. When hovering your cursor over a pill you'll see an interface similar to the one on the right below, featuring the current status, latency and any specific service messages.

Image description

On the left above you can also see smaller status panels for specific server nodes. If you view the new status page you'll notice that the most important statuses are at the top, shown larger and with more visible history, while less important things like individual service nodes are displayed more densely further down.

You may also notice that some services not relevant to customers have been replaced on the new status page with more appropriate services such as email services and the Custom List downloader service.

One last thing to mention about the design is that all the displayed dates and times are localised to you as and when you view the page, making it much simpler to determine when events occurred without needing to interpret unfamiliar timezones.

Before we decided to make our own status page (absolutely everything about this feature is custom) we looked at many commercial and open-source solutions. Although many of them could accomplish what we needed, none of them fit the design aesthetic of our website or displayed the exact information we needed in the way we wanted it shown.

That's why we chose to design this ourselves. The flexibility that building things yourself affords cannot be overlooked, and that extends even to small things like making sure the hover-tooltip stays on screen when you get near a browser window edge, something we found even some commercial status products didn't offer.

So that's the update for today. We hope you enjoy the new status page and will bookmark it for your convenience, and as always have a wonderful week!


Improvements to the API test console and Custom List storage increase

Image description

Today we've made two changes to the service, the first of which helps developers get started with our API faster: the test console found in our API documentation now generates a URL for you to query based on the flags you enable. Below is a screenshot showing the new interface, with the purple section being completely new.

Image description

We've also removed the submit button that used to accompany the test console because it was redundant; instead, we now dynamically update both the output example and the new URL generator section as you toggle flags on and off or change the type of request being tested from one of the supplied dropdowns.
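
To illustrate, here's roughly the kind of URL the console produces once you've toggled a few flags on (the key and address below are placeholders):

```python
# Illustrative only: building a query URL from toggled flags, mirroring what
# the test console now generates for you. Key and address are placeholders.
from urllib.parse import urlencode

ip = "203.0.113.7"
flags = {"key": "your_api_key", "vpn": 1, "asn": 1, "risk": 1}

url = f"https://proxycheck.io/v2/{ip}?{urlencode(flags)}"
print(url)  # https://proxycheck.io/v2/203.0.113.7?key=your_api_key&vpn=1&asn=1&risk=1
```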

The second change we've made today is that we've increased the storage available for Custom Lists from 4MB to 8MB, as illustrated below.

Image description

We've made this change because users are making use of larger and larger lists and we want to facilitate that usage. Some users had resorted to breaking large lists up into multiple lists, which just seemed inefficient. We ran some tests to determine the performance impact on the API and didn't see any degradation. We may increase individual list sizes again in the future, but right now we felt 8MB struck the right balance.

So those are the updates for today. We hope you're having a wonderful week, and thanks for reading!


Operator Data Expansion

Image description

Since we introduced operator data to the API in December 2021, we've often been asked by customers to broaden the types of operators we support and generally expand on the feature. To deliver on those requests, we showed last year how we had been adding decentralised VPN operators, and then a month later we integrated operator data into the positive detection log within customer dashboards.

Today we're improving operator data again by building operator profiles for scraping services. We've been monitoring many of these services since last year and we feel now is the right time to create distinct operator cards and expose operator data within our API for these organisations.

Image description

Above is one such card for Oxylabs, one of the largest operators in the scraping and residential proxy selling space. You'll also find cards for their many competitors of all sizes.

Broadening the kinds of operators we list doesn't stop here; we will start to include datacenter hosts, residential proxy sellers, click farming services and more in future updates. We are committed to expanding our operator data with rich cards like the one above and detailed, easy-to-parse data exposed through our API.

That's all for today, thanks for reading – we hope you have a great week!


Hash-based Message Authentication Code support added to the API

Image description

Today we've added a new feature to the latest version of our API called Hash-based Message Authentication Code, abbreviated HMAC, which makes it possible for you to verify our JSON payloads by hashing them and then comparing the resultant hash to the one we supply in a new header alongside our API results.

Below is how the shared key appears within the customer dashboard. To use this feature you would visit your dashboard, copy your unique HMAC key into your software, and then compute a SHA-256 HMAC of our JSON payloads using this shared key.

Image description

The new header where our hashes will be available is called http_x_signature. You'll only find it presented in our API results if you're making your query via TLS (HTTPS) and have visited your dashboard since this feature was added, so that you can retrieve your unique HMAC key.
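
Here's a minimal verification sketch, assuming the header carries a hex-encoded HMAC-SHA256 of the raw JSON body (confirm the exact encoding against our documentation):

```python
# Minimal verification sketch. Assumes the signature header is a hex-encoded
# HMAC-SHA256 of the raw JSON response body; confirm the exact encoding
# against the API documentation.
import hashlib
import hmac

HMAC_KEY = b"your_hmac_key_from_the_dashboard"  # placeholder

def payload_is_authentic(raw_body: bytes, signature_header: str) -> bool:
    """Return True if the payload matches the signature supplied in the header."""
    expected = hmac.new(HMAC_KEY, raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing hashes.
    return hmac.compare_digest(expected, signature_header)
```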

Whilst we are confident that none of our results are manipulated en route to you when using our encrypted TLS endpoint, this expands upon that security for those with an elevated threat model.

That's the update for today. We will be updating our official PHP library to take advantage of this feature in the near future.


Introducing the Account Activity Log

Image description

Today we're excited to announce our new Account Activity Log feature. This tool provides a detailed record of all actions performed within your account, enhancing both transparency and security.

What does the Account Activity Log actually record?

The Account Activity Log keeps track of all activities related to your proxycheck.io account. From logins and password changes to adjusting custom rules and lists, or even changing email preferences, every action is documented. This feature ensures you have a clear overview of your account activity.

Below is a screenshot showing a small example of some events. At launch, 50 different events will be recorded here, and we'll add more as new features launch.

Image description

How to Access

Log into your proxycheck.io dashboard and click on the new account activity button found in the top right of the settings tab. Here, you can view all recorded events starting from today in an organized manner.

Looking Ahead

The Account Activity Log is part of our ongoing effort to enhance your account security and control. Today we've also added location data to the login emails you receive, which further strengthens account security.

Thanks for reading and we hope you're having a wonderful week!


Dashboard Statistics Update

Image description

Today we've updated the graphs you'll find within your dashboard's stats tab to further break out the detection types shown, which now include blacklisted entries and those triggered by a custom rule.

This change was made based on user feedback, and it brings some much-needed consistency to the stats tab: previously you could only see blacklisted and custom rule entries in your positive detection log and active tag list, but not in the graphs.

Below is how the new bar graphs look with the added data.

Image description

Within the bar graphs, we're also further breaking out blacklisted entries into their own bars for both IP addresses and email addresses. Currently, custom rules are not supported for disposable email checking, which is why there's no separate entry for those at this time.

And below is how the new line graph looks. We've also now locked the colors used to represent specific data points so they're consistent between loads, even if one or more pieces of data are absent.

Image description

To have your data populate these new graphs you'll need to be using the latest version of our API, dated the 22nd of January 2024. We've also updated the Dashboard APIs to make this data available there too. We know that for some of you this has been a much-desired feature, and we're happy to oblige, as the inconsistency between the graphs and logs had been overlooked for far too long.

Thanks for reading and have a wonderful weekend!


New API feature: Hostnames!

Image description

Today we've expanded the information we expose through our API to include hostnames. This has been an often-requested feature which we've been working towards delivering at scale for some time. The reason it has taken us so long is because of the unique challenges presented by hostname data, such as:

  1. Because IPs have unique hostnames, we cannot share hostname data across a large range of addresses like we can with other data.
  2. Performing hostname lookups live to DNS servers as you perform an API request has a huge latency penalty (sometimes 1 sec+).
  3. There are billions of addresses we need to cache the hostnames for and the data must be synchronised across all our servers.

So to deliver on this feature we had to think very carefully, overcome a few technical hurdles and perform a lot of testing. Hurting the API's responsiveness was the biggest concern we had going into this, as we knew the data would be very large and cause a lot of in-memory cache misses that would result in expensive database queries.

So how are we accomplishing hostnames at scale?

Firstly, to tackle the latency issue we're going to cache the hostnames for every IPv4 address, and through some clever compression we've devised we're able to keep our hostname database small while also making it extremely efficient to read from and write to. As a result, in our testing there is no measurable impact on the latency of the API.

Secondly, we're not going to perform live lookups for hostnames we don't have cached. This won't impact IPv4 lookups, as we intend to cache 100% of them at all times, but for IPv6 this represents a hurdle that we're still working on. The issue is that the IPv6 address pool is so large that we cannot pre-compute hostnames for it.

One option we explored was compressing the IPv6 addresses into contiguous ranges, but that leads to inaccuracies in the data and still leaves an unfathomably large amount of data to cache and synchronise. So for the time being IPv6 has experimental support only, which means that if we do have a hostname available for an IPv6 address we'll present it, but don't count on these being available.

So let's show you how it looks in a live API result. We've got two outputs in the screenshot below, and we've highlighted only the new hostname data in both.

Image description

To have hostnames show, you'll need to either supply &asn=1 with your requests or utilise a hostname as a condition in your custom rules. If we don't have a hostname for an IP address, the API output simply won't show one, just like with our other data.
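
Below is a brief sketch of reading the new field when querying with &asn=1. The response layout shown is an assumption based on how our other data fields behave; consult the documentation for specifics:

```python
# Hedged sketch: reading the hostname field when querying with &asn=1.
# The exact response layout is an assumption; see the API docs for specifics.
import json
import urllib.request

ip = "203.0.113.7"  # example address; the key below is a placeholder
url = f"https://proxycheck.io/v2/{ip}?key=your_api_key&asn=1"

with urllib.request.urlopen(url) as response:
    data = json.load(response)

hostname = data.get(ip, {}).get("hostname")
if hostname is None:
    print("No hostname available for this address")  # field is simply absent, as with other data
else:
    print("Hostname:", hostname)
```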

So that's the update for today. We know a lot of you have been waiting for this feature, we've had many requests for it over the years, and it has taken considerable time to deliver, but today the wait is at least partially over. We're still working on broad IPv6 support for this feature and hopefully we'll have an update for you on that later this year.

Thanks for reading and have a great week.


Custom Rules enhanced with Dividers

Image description

Since we introduced Custom Rules in 2019 it has continued to be one of our most popular features, and as customers have become more familiar with it and we've expanded its feature set, we're now seeing some customers with upwards of 100 custom rules in their account.

Last year we improved the interface for these power users by introducing the ability to hide deactivated rules and to search for rules based not only on their name but also their content, which includes searching both condition and output values.

Today we're adding another power user feature: dividers. This feature allows you to add dividers between and above rules so that you can visually separate rules that have different use cases. You can add as many dividers as you like, and you can both name them and set the color of each one individually. Below is an example of how the feature looks when you've added a few dividers.

Image description

We wanted to make dividers very easy to use, so you can simply click on the name of a divider to change it and drag dividers around to move them, just like you can with rules. We also didn't want them to look visually cluttered, so you only see the divider control buttons when you mouse over a divider, as shown below.

Image description

And finally, we wanted you to be able to customise your dividers not just by name but with any color and level of transparency you want. To that end, we've added a real-time, full-spectrum color picker which you'll see if you click on the Color button, as shown below.

Image description

So that's the update for today. It's live in everyone's Dashboard right now, and we hope you have a lovely weekend.
