New PHP library now available in Composer

Today we launched a brand-new PHP library with full v2 API support, covering all our features including query flags, custom query tagging and local country blocking.

The best part: it's available through Composer, the de facto dependency manager for PHP. You can find all the information about the package, including how to install and use it, on its packagist.org page.

If you find any bugs please feel free to raise an issue on our GitHub page. We also welcome you to fork it, modify it and submit your changes back to us, which we'll be more than happy to merge.

We hope everyone is having a great weekend!


Improved Last Update feature and backend changes

Last Update Improvements

You may have noticed that in 2017 we added a "Last Update:" feature to the bottom right of most web pages. When you hovered over it with your mouse a window would open displaying four or five recent changes to that page.

This is an important feature to us because it informs you of recent changes and it lets you know things are still being regularly updated. And it's not just great for seeing feature changes as we also use it on both our Privacy Policy and GDPR pages where we keep a record of policy changes.

Today we've updated the feature with a nicer appearance, but most importantly we've made it easier to see on smaller screens and added scrolling, so we no longer need to restrict how many entries we place in the window. In fact, we've gone back and restored many updates that were previously removed due to size constraints.

Above is a screenshot showing the Dashboard's last-update window, which now displays 23 updates going back to October last year.

Backend Changes

Recently we've been focusing more on backend changes that aren't very visible to users, including a complete overhaul of our cluster management code. In fact, just this morning we went live with our new cluster control system, which stops race conditions between different nodes. This improves the reliability of our dashboard APIs and all other features that take input data from customers. Essentially it stops data collisions, which are caused by two or more nodes receiving conflicting data at the same time.
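A common way to stop data collisions like these is a versioned compare-and-set write: a node's update is applied only if nobody else has modified the record since it was read. The internals of proxycheck.io's cluster control system aren't public, so the following is only a minimal Python sketch of the general idea, with invented names throughout:

```python
import threading

class VersionedStore:
    """Toy key-value store that rejects stale writes, similar in spirit to a
    compare-and-set check between cluster nodes. Purely illustrative."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}  # key -> (version, value)

    def read(self, key):
        """Return the current (version, value) pair, or (0, None) if unset."""
        with self._lock:
            return self._data.get(key, (0, None))

    def write(self, key, value, expected_version):
        """Apply the write only if no other node updated the key first."""
        with self._lock:
            current_version, _ = self._data.get(key, (0, None))
            if current_version != expected_version:
                return False  # conflict: another node won the race
            self._data[key] = (current_version + 1, value)
            return True

store = VersionedStore()
version, _ = store.read("customer:123:whitelist")
assert store.write("customer:123:whitelist", ["1.2.3.4"], version)      # first write wins
assert not store.write("customer:123:whitelist", ["5.6.7.8"], version)  # stale write rejected
```

The losing node would then re-read the record, merge its change against the fresh value, and retry, rather than silently overwriting the other node's data.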

We also recently rewrote our proxy Inference Engine for higher accuracy and improved speed. This has been live since late May with impressive results.

Finally just a few days ago we completed a significant rewrite to our node syncing system with the goal of decreasing sync times and lowering CPU usage. We were able to reach both of those goals.

So that's everything we wanted to share with you today. We're committed to improving every facet of the service, not just the flashy things users can tangibly see and use but also the backend nitty-gritty which ensures the service stays fast and reliable for years to come.

Thanks for reading and we hope everyone is having a great week.


Dashboard and Web Interface enhancements

If you've been using the dashboard recently you may have noticed a few enhancements we've made to it, including new preference toggles, new API setting options, enhanced whitelist/blacklist displays, a copy-API-key button and a new action result status indicator along the bottom of the screen.

We've made these changes to make the dashboard look nicer and present more information, so you know when a change you've made has been applied successfully.

Below is a screenshot showing the new preference toggles which replace our old tick-boxes.

The new toggles aren't just a visual improvement: you may notice we've removed the "Save Preferences" button, which means we now save your preferences as soon as you alter one of the toggles.

Below is a screenshot showing our completely new API Settings pane.

We added the first "Main API" toggle because users sometimes lock themselves out of their own website through a misconfiguration of their client-side proxy-checking code. This toggle is simply a fast way to deny all checks made with your API Key so you can regain access to your property. It was added based on user feedback.

The second toggle is for security. Sometimes you may use your API Key in a less-than-secure place or implementation, for example shipping it in software to your customers (which we do not recommend). If you do want to use your key that way we don't want to stop you, so we've added this toggle to let you deactivate external access through our dashboard APIs. What this means is, people cannot alter your white/black lists or view your account usage statistics just by knowing your API Key.

We've also added these new toggles to our refresh of the Web Interface page, where we've changed the layout, moved the options to the far right and added a few more toggles for last-seen date and proxy type, as seen in the screenshot below.

Below is a screenshot showing our new Whitelist/Blacklist UI within the dashboard.

With the enhanced White/Blacklist UI you now get three boxes along the top which display exactly how many IP Addresses, Ranges and ASNs were lifted from the text field below them. This is a great way to spot whether an entry you just added wasn't recognised.
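As an illustration of the kind of classification those counter boxes imply, here is a minimal Python sketch that sorts free-form entries into addresses, CIDR ranges and ASNs. The parsing rules here are an assumption for illustration, not the dashboard's actual implementation:

```python
import ipaddress
import re

def classify_entries(text):
    """Classify each line of a whitelist/blacklist text field as an IP
    address, a range (CIDR), or an ASN, mirroring the three counter boxes.
    Unrecognised lines are counted separately, which is how a user would
    notice an entry failed to register."""
    counts = {"addresses": 0, "ranges": 0, "asns": 0, "unrecognised": 0}
    for line in text.splitlines():
        entry = line.strip()
        if not entry:
            continue  # skip blank lines
        if re.fullmatch(r"AS\d+", entry, re.IGNORECASE):
            counts["asns"] += 1
            continue
        try:
            if "/" in entry:
                ipaddress.ip_network(entry, strict=False)  # CIDR range
                counts["ranges"] += 1
            else:
                ipaddress.ip_address(entry)  # single IPv4/IPv6 address
                counts["addresses"] += 1
        except ValueError:
            counts["unrecognised"] += 1  # would show up as a missing count
    return counts

print(classify_entries("1.2.3.4\n10.0.0.0/8\nAS12345\nnot-an-ip"))
# {'addresses': 1, 'ranges': 1, 'asns': 1, 'unrecognised': 1}
```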

Below is a screenshot showing our new action result status indicator.

You'll receive an indication like this whenever you change a preference, setting or whitelist/blacklist. These indicators aren't just visual: the blue success indicator only appears if the request was actually performed successfully, and you'll receive a red indicator like the one below if something went wrong.

To see all of these changes and some other subtle ones like the new help indicator icons and API copy button head on over to the customer dashboard. As always these changes are available to all customers whether you're on a free or paid plan.


Improved Email Preferences

Today in the customer dashboard we released an update that changes the wording of our email preference hover descriptions. We've also added a new preference checkbox to enable near-immediate query-exhausted emails, an often-requested feature from customers. Below is a screenshot showing one of the new detailed hover descriptions.


We feel these new notices will do a better job of informing you about exactly the kinds of emails we will be sending and how often. By default the only emails we send you when you set up a new account are the ones displayed in the above hover description.

All our other email types, including promotions, new features, daily query overages, total outages and changes, are optional and opt-in only, meaning you have to specifically enable them within your dashboard.

Thanks for reading and we hope everyone is having a great weekend.


It's GDPR day!

Today the General Data Protection Regulation (GDPR) comes into force in Europe. If you've been receiving a torrent of emails recently with regards to the regulation you may be wondering why exactly we didn't send you one.

The simple answer is, we've been GDPR compliant for more than a year. From the very start we've had a clear and concise privacy policy which is easy to read and understand (part of the GDPR's stipulations). And when you signed up for our service the only "opt-out" email correspondence we had in your Dashboard was to disable important account-related emails.

All our promotional emails have always been opt-in, meaning you had to tick boxes in your dashboard to specifically allow us to send you any of those types of emails, as shown below.

On top of this, we only ever collected the bare minimum of personal information from our customers. You'll notice that when you sign up, or even pay for a paid plan with us, we don't ask for your name or address. Nor do we ask for your telephone number, gender, age or other personal information. We simply don't need that information to offer our product, so we don't ask for it.

Our final piece of compliance, and probably the most important to you: we don't mine or sell your data, or the data you send us about your customers (their IP Addresses or the services you operate that they're visiting). We don't use any third parties to sub-process the data you send us or to process the information we have about you; anything you send us never leaves our servers for a third party under any circumstances. Since we don't play around with your data, we've had no need to ask for your permission by email to do things with it.

In GDPR terms we're what's known as a data processor, since we process data on your behalf about your users: you send us their IP Addresses and we tell you if they're using a Proxy or VPN service.

It's very important that we do not keep your users' personal IP Addresses, and we don't. We don't keep them in our (very sparse) server logs and we do not commit them to persistent storage. The IP Addresses you send us which we determine are not running a Proxy Server or a VPN are only stored temporarily in server memory and are purged from that memory, on average, within 15 minutes of us receiving them.

On top of that, we never store negative IP determinations with our customer identifiers (API Keys). This means that once you've sent an IP Address to us, if we determine it's not a Proxy or VPN server it is unlinked from your API Key, so no correlation between you and your users can be made by us or by a third party (in the unlikely event our servers were compromised).
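The two guarantees above, memory-only storage with a roughly 15-minute expiry and no link back to an API Key, can be sketched as a simple TTL cache. This is purely illustrative; the class and method names are invented and this is not proxycheck.io's actual code:

```python
import time

class NegativeCache:
    """Sketch of in-memory-only storage for IPs judged not to be proxies.
    Entries carry no API-key association and expire after a TTL, echoing
    the ~15-minute purge described above."""

    def __init__(self, ttl_seconds=15 * 60):
        self.ttl = ttl_seconds
        self._seen = {}  # ip -> expiry timestamp; note: no customer key stored

    def remember(self, ip, now=None):
        """Record a negative determination; 'now' is injectable for testing."""
        now = time.time() if now is None else now
        self._seen[ip] = now + self.ttl

    def purge(self, now=None):
        """Drop every entry whose TTL has elapsed."""
        now = time.time() if now is None else now
        self._seen = {ip: exp for ip, exp in self._seen.items() if exp > now}

    def known(self, ip):
        return ip in self._seen

cache = NegativeCache()
cache.remember("203.0.113.7", now=0)
cache.purge(now=60)          # still within the TTL
assert cache.known("203.0.113.7")
cache.purge(now=16 * 60)     # past the TTL, entry is gone
assert not cache.known("203.0.113.7")
```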

The very last thing to mention about our compliance is security auditing and our use of strong encryption. We are constantly auditing our processes and our code to identify weak points and all our code is created with security in mind from the very start, not afterwards.

We have to date never had any data breaches or leaks. We store all passwords using strong bcrypt hashing, our cluster offers strong TLS 1.2 connectivity for all your website interactions and API calls, and our server nodes always send and receive between each other using TLS 1.2 transport encryption, with AES-256 protecting the blocks of data actually being synced.
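For readers implementing something similar, the essential pattern behind bcrypt-style password storage is a unique random salt plus a deliberately slow key-derivation function, compared in constant time. Python's standard library doesn't ship bcrypt, so this sketch substitutes `hashlib.scrypt` purely to show the same pattern; it is not how our systems are implemented:

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Salted, deliberately slow password hash. scrypt stands in here for
    bcrypt: both are memory/CPU-hard functions designed to make brute-force
    attacks expensive."""
    salt = os.urandom(16)  # unique per password; stored alongside the digest
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```

The constant-time comparison via `hmac.compare_digest` matters: a naive `==` can leak timing information about how many leading bytes matched.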

We know that many of you are probably unenthused by the GDPR by now, mostly due to the constant emails you've received leading up to today. But it's important: for too long companies have been misusing people's personal information, and although individual European countries have had their own regulations, this is the first single, EU-wide regulation that has real penalties and is easily identifiable by users, so they know their rights and know when companies are not fulfilling their obligations.

You can read both our Privacy Policy and GDPR pages if you'd like, but we've covered the main points in the post above for you.


WordPress Plugin Promotions

We know that choosing the right security partner can be difficult. There is a learning curve when integrating an API and you have to consider the costs associated with implementation and usage.

That's why we've been promoting third-party plugins which integrate our API into your software: it removes some anxiety and makes implementation straightforward. One such plugin that we've been very fortunate to have made on our behalf for WordPress is called Proxy & VPN Blocker, made by an independent developer called Ricksterm.

It was released in December 2017 and has received regular and substantial updates since then. It was the first third-party plugin to integrate our v2 API and it's the only plugin we're aware of that can show your real-time query usage and positive detection log outside of our own customer dashboard.

Recently it gained the ability to protect not just login, registration and comment-submission pages but also your entire website if you so choose, while caching API responses to save you money and reduce the impact on your page loading times. In its most recent update it even gained the ability to block countries.

With WordPress used by an estimated 30% of websites, we feel this plugin is very important, and we want to incentivise our customers to donate towards the plugin's continued development. To that end, in partnership with Ricksterm, we're offering two promotions.

If you donate $15 to the plugin through its WordPress plugin page you will be given our 10,000-queries-a-day package for a period of one year. Usually this package costs $19.10 when purchased annually or $23.88 when paid for monthly.

We're also offering another promotion for the next package up. If you donate $30 you'll receive our 20,000 query package for a year which usually costs $38.30 when purchased annually or $48.88 when paid for monthly.

We (proxycheck.io) will not receive any of the money you donate; it will all go to Ricksterm, who develops the WordPress plugin. We're merely giving you a free gift for your donation to him.

When donating either $15 or $30, please supply your email address to Ricksterm through the donation note feature so that he can pass it on to us and we can apply the query volumes specified above to your account. And of course, make sure you've signed up to proxycheck.io first!

One last thing to note: you don't need to use the queries exclusively with the WordPress plugin. They are normal queries, exactly the same as you would receive when making a purchase through our own website, so you can use them in any way and with any plugin or self-created implementation of our API.

We hope many of you will take advantage of the promotions as they represent quite significant savings. We intend to continue offering them for as long as the WordPress plugin remains actively supported by the developer.

If you've made or intend to make a plugin at this quality level please let us know, we would love to feature your plugin on our website and perhaps even support you with a similar promotion so you too can earn for your contribution to our API.

Thanks!


Sharing some performance metrics

Since we last updated our API 48 hours ago, we've been recording the average TTFB (Time To First Byte) of our service at our CDN (Content Delivery Network), and we've been comparing those numbers to our figures from the period leading up to the upgrade.

What we've found is a vast difference in performance with the new code far outperforming the old code. We've made a graph below showing the numbers and then we'll go into a brief analysis.


What you're seeing above is 48 hours of queries to our API leading up to the code change in red, and 48 hours after the code change in blue. On the far left is the percentage of queries, and along the bottom is the time it took that percentage of queries to be answered by our API.

So these numbers include not just the processing time on our servers but also the network overhead in the time it takes to retrieve the answer from us over the internet.

As you can see in the graph, the new code is vastly outperforming the old code. Whereas before we were only answering 1.76% of queries within 25ms, we're now answering 23.07% of queries within 25ms.

Where before we were answering 18.47% of all queries in under 100ms we're now answering 62.64% of all queries in under 100ms. Previously we answered 59.56% of all queries in under 200ms, now we're answering 90.96% of all queries in under 200ms.
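The statistic quoted above, the percentage of queries answered under a latency threshold, is straightforward to compute from raw timing samples. Here is a small Python sketch using made-up sample data (not our measurements):

```python
def pct_under(latencies_ms, threshold_ms):
    """Percentage of samples answered within the threshold, i.e. the same
    kind of figure as '% of queries under 25ms/100ms/200ms'."""
    if not latencies_ms:
        return 0.0
    within = sum(1 for t in latencies_ms if t < threshold_ms)
    return round(100 * within / len(latencies_ms), 2)

# Illustrative sample of TTFB measurements in milliseconds.
samples = [12, 18, 40, 95, 110, 150, 180, 210, 340, 900]
for threshold in (25, 100, 200):
    print(f"under {threshold}ms: {pct_under(samples, threshold)}%")
# under 25ms: 20.0%
# under 100ms: 40.0%
# under 200ms: 70.0%
```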

With these changes it means you can now use the API in more latency sensitive deployments. We couldn't be more thrilled with these results and we've been very excited to share the difference in performance with you.

Thanks for reading and we hope everyone had a great weekend!


Updated v2 API with faster VPN and ASN lookups now live!

At the end of April we shared with you some performance numbers for the new update to our v2 API which enhances VPN and ASN lookup speed. Today we're pleased to announce that the update is now live on our v2 endpoint.

This update has been a large undertaking, as we focused not only on speed but also on improving accuracy. For over a year we have been painstakingly adding VPN providers to our dataset, but frankly there are thousands upon thousands of datacenters all over the world that can, at a moment's notice, offer service to any of the thousands of VPN providers operating globally.

So we set upon a new strategy. The way we were blocking VPNs previously (blocking the ASNs that serve specific datacenters) was a good strategy, but it had some flaws: we couldn't make exceptions for companies that use those same ASN blocks for residential or business internet access, and it meant we often gave out the incorrect provider name for a VPN service when we blocked their ASN range.

With our new VPN code launched today, both of those issues have been solved. We can now block ASNs while making exceptions for specific IP Ranges or providers, and we always give you the most accurate provider name for a specific IP even if it shares an ASN range with another company.

Another change we've made is we're now using a new Machine Learning system for VPN detection. This is a real-time inference engine which will make determinations for all queries that have the &vpn=1 flag. This new engine has already broadened our VPN detection rate by 8% in testing when combined with our previous VPN detection methods.

The last thing we wanted to discuss is our Real-Time Inference Engine for proxy detection. With this update to v2, where we've introduced the new VPN Inference Engine, we have made quite a performance breakthrough. By using enhanced math functions in our nodes' processors, combined with pre-computing computationally heavy instructions and storing their results, we have been able to greatly reduce inference time from an average of 250ms to just 1.5ms. This is why we have not added a way to disable the Inference Engine when performing VPN checks; it's simply so fast there was no need.
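The "pre-compute heavy work and store the result" optimisation described above is essentially memoisation. The real engine's internals aren't public, so this is only a toy Python illustration of why a cached second call is dramatically cheaper than recomputing:

```python
import functools
import math
import time

@functools.lru_cache(maxsize=None)
def heavy_feature(x: int) -> float:
    """Stand-in for a computationally heavy step of an inference pipeline.
    The function name and workload are invented for illustration."""
    total = 0.0
    for i in range(1, 200_000):
        total += math.sin(x * i) / i
    return total

start = time.perf_counter()
first = heavy_feature(42)   # computed from scratch
cold = time.perf_counter() - start

start = time.perf_counter()
second = heavy_feature(42)  # served from the cache
warm = time.perf_counter() - start

assert first == second
assert warm < cold          # cached lookup is far cheaper than recomputing
```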

And that brings us to the benchmarks. In our testing with VPN, ASN and Inference checks enabled, supplying the API with 1,000 IPs in a single query would previously use up the entire 90-second query window and only check 300 of the 1,000 IPs.

With the new code we're able to supply 10,000 IP Addresses with the same flags enabled and receive results for all 10,000 addresses within 10.5 seconds. This is a vast improvement which means you no longer need to forgo VPN, ASN or Inference Checks to get the fastest results possible. For single queries checking a single address we're seeing a consistent query time of under 6ms (after network overhead).
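If you'd like to try a multi-IP check yourself, a request might be assembled as below. Note that the endpoint shape and parameter names here are assumptions based on this post (only the `&vpn=1` flag is confirmed above); check the official API documentation before relying on them:

```python
from urllib.parse import urlencode

def build_batch_check(ips, api_key, vpn=True, asn=True):
    """Build the URL and POST body for a multi-IP check.
    ASSUMED endpoint shape: proxycheck.io/v2/ with a comma-separated
    'ips' body field and vpn/asn query flags."""
    params = {"key": api_key, "vpn": int(vpn), "asn": int(asn)}
    url = "https://proxycheck.io/v2/?" + urlencode(params)
    body = {"ips": ",".join(ips)}  # batch of addresses in one request
    return url, body

url, body = build_batch_check(["8.8.8.8", "1.1.1.1"], "demo-key")
print(url)   # https://proxycheck.io/v2/?key=demo-key&vpn=1&asn=1
print(body)  # {'ips': '8.8.8.8,1.1.1.1'}
```

You would then POST `body` to `url` with any HTTP client and parse the JSON response, one result object per address.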

If you're not already using our v2 API we highly recommend the upgrade, not only is the detection for VPN's more accurate but the speed enhancements are unreal. We have ported some of this functionality back to v1 just to maintain compatibility but we cannot guarantee it will be as fast. As always all of these new features are available immediately to all customers whether you're on a paid or free plan.

Thanks for reading, we hope everyone has a great weekend.


Upcoming improvements to VPN and ASN results

When we launched our recent refresh of the v2 API in March we spoke about some of the things we were planning for the near future including VPN and ASN lookup speed enhancements.

Today we're ready to share a preview of those performance enhancements, and they are quite significant. Firstly, we'd like to show you how long it takes to check 100 IP Addresses under the current v2 API when checking for both VPNs and ASNs but with the Real-Time Inference Engine turned off.

Current v2 API with VPN and ASN checks enabled: 100 IP Addresses in one query

Query Time: 22.326 Seconds

And now with our new code, with the same level of accuracy.

New v2 API with VPN and ASN checks enabled: 100 IP Addresses in one query

Query Time: 0.452 Seconds

That's a dramatic improvement. But look what happens when we check 1,000 IP Addresses using the new code.

New v2 API with VPN and ASN checks enabled: 1,000 IP Addresses in one query

Query Time: 2.882 Seconds

And this is with no caching, all of these addresses were generated randomly and haven't been put through the API previously. With this kind of speed improvement it means there's no reason not to enable VPN and ASN checks any longer. We've found in testing that the previous code would take between 250ms and 350ms for both a VPN and ASN reply on a single address within a single query.

But with the new code we're seeing results of between 6ms and 10ms (depending on the node answering the query) for a single address in a single query, and between 2-3ms per IP when performing a multi-check. These are huge improvements, and it's not just about speed: we're also enabling enhanced VPN checks with this new code so that we can detect VPNs more effectively.

We expect to roll this update out later this week on our v2 API. You won't need to alter any of your client-side code, as the result format from the API is not being altered.

Thanks for reading and have a great week!


New Cluster Node: ZEUS!


For some time we've been looking for a new server to add as a node within our cluster to replace ATLAS, one of our current nodes.

We added ATLAS to the cluster last year mainly as a way to get more redundancy: the chances of three geographically separated servers all going down at the same time are much lower than with two.

But with queries having increased to more than 10x what they were when we added ATLAS, it has come time to let it go and replace it with a more capable server.

Here are the specs of ATLAS, HELIOS and PROMETHEUS.

  • ATLAS: Core i3, 3.3GHz, 2 cores / 2 threads, 8GB of RAM, 100Mbps network
  • HELIOS: Core i7, 3.4GHz, 4 cores / 8 threads, 16GB of RAM, 1Gbps network
  • PROMETHEUS: Xeon E5, 3.6GHz, 16 cores / 32 threads, 64GB of RAM, 400Mbps network

As you can see ATLAS is by far the weakest node and although it served its duty by giving us the redundancy we wanted it simply couldn't keep up with some of our more demanding features such as syncing customer statistics and the inference engine. It was essentially pegged at 95% to 100% CPU load practically all day, every day.

So instead of adding a fourth node and keeping ATLAS, we've decided to retire ATLAS and replace it with a brand-new node. Here is the specification of the new ZEUS node.

  • ZEUS: Xeon E3, 3.7GHz, 4 cores / 8 threads, 32GB of RAM, 1Gbps network

The new node is online within our cluster right now and we will be removing ATLAS soon, perhaps even by the time you read this post. It has served us well; farewell, ATLAS!

We are still looking to add servers worldwide, and we may very well add a fourth server later this year with a similar specification to HELIOS or ZEUS.

Thanks for reading and have a great weekend!

