It's GDPR day!

Today the General Data Protection Regulation (GDPR) comes into force in Europe. If you've been receiving a torrent of emails about the regulation recently, you may be wondering why exactly we didn't send you one.

Well, the simple answer is that we've been GDPR compliant for more than a year. From the very start we've had a clear and concise privacy policy that is easy to read and understand (one of the GDPR's stipulations). And when you signed up for our service, the only "opt-out" email setting in your Dashboard was the one for disabling important account-related emails.

All our promotional emails have always been opt-in, meaning you had to tick boxes in your dashboard to specifically allow us to send you any of those types of emails, as shown below.

On top of this, we only ever collect the bare minimum of personal information from our customers. You'll notice that when you sign up, or even purchase a paid plan, we don't ask for your name or address. Nor do we ask for your telephone number, gender, age or other personal information. We simply don't need that information to offer our product, and so we don't ask for it.

Our final piece of compliance, and probably the most important to you: we don't mine or sell your data, or the data you send us about your customers (their IP Addresses or the services of yours they're visiting). We don't use any third parties for sub-processing the data you send us or for processing the information we have about you. Nothing you send us leaves our servers for a third party under any circumstances. Since we don't play around with your data, we've had no need to ask for your permission by email to do things with it.

In GDPR terms we're what's known as a data processor, since we process data about your users on your behalf: you send us their IP Addresses and we tell you if they're using a Proxy or VPN service.

It's very important that we do not keep your users' IP Addresses, and we don't. We don't keep them in our (very sparse) server logs and we do not commit them to persistent storage. The IP Addresses you send us which we determine are not running a Proxy Server or a VPN are only stored temporarily in server memory, and are purged from that memory on average within 15 minutes of us receiving them.

On top of that, we never store negative IP determinations alongside our customer identifiers (API Keys). That means once you've sent an IP Address to us, if we determine it's not a Proxy or VPN server it is unlinked from your API Key, so no correlation between you and that address can be made by us or by a third party (in the unlikely event our servers were compromised).
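
For the technically curious, here's a minimal sketch (in Python, purely illustrative and not our actual server code) of what a time-limited in-memory store of this kind looks like: negative determinations are keyed by the address alone, with no API Key attached, and expire after roughly 15 minutes.

```python
import time

# Illustrative only: a tiny in-memory store for negative (non-proxy) results.
# Entries are keyed by IP address alone -- no API Key or other customer
# identifier is stored alongside them -- and are purged after ~15 minutes.
PURGE_AFTER_SECONDS = 15 * 60

_negative_cache = {}  # ip -> timestamp of when the determination was made

def remember_negative(ip: str) -> None:
    """Record that this IP was checked and found not to be a proxy/VPN."""
    _negative_cache[ip] = time.monotonic()

def recently_checked(ip: str) -> bool:
    """Return True if a fresh negative determination exists for this IP."""
    seen = _negative_cache.get(ip)
    return seen is not None and (time.monotonic() - seen) < PURGE_AFTER_SECONDS

def purge_expired() -> None:
    """Drop entries older than the purge window (run periodically)."""
    cutoff = time.monotonic() - PURGE_AFTER_SECONDS
    for ip in [ip for ip, ts in _negative_cache.items() if ts < cutoff]:
        del _negative_cache[ip]
```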

The very last thing to mention about our compliance is security auditing and our use of strong encryption. We are constantly auditing our processes and our code to identify weak points, and all our code is written with security in mind from the very start, not as an afterthought.

To date we have never had any data breaches or leaks. We store all passwords hashed with bcrypt, our cluster offers strong TLS 1.2 connectivity for all your website interactions and API calls, and our server nodes always communicate with each other using TLS 1.2 transport encryption, with AES-256 protecting the blocks of data actually being synced.
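
For readers wondering what storing passwords with bcrypt looks like in practice, here's an illustrative Python sketch using the widely available `bcrypt` library (not our server code): only a salted, one-way hash is ever stored, never the password itself.

```python
import bcrypt  # pip install bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() embeds a per-password random salt and a cost factor in the hash
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash using the salt embedded in stored_hash
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("wrong guess", stored)
```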

We know that many of you are probably unenthused by the GDPR by now, mostly due to the constant emails you've received leading up to today. But it's important: for too long, companies have been misusing people's personal information, and although individual European countries have had their own regulations, this is the first time there is a single EU-wide regulation that carries real penalties and is easily identifiable by users, so they know their rights and know when companies are not fulfilling their obligations.

You can read both our Privacy Policy and GDPR pages if you'd like, but we've covered the main points in the post above for you.


WordPress Plugin Promotions

We know that choosing the right security partner can be difficult. There is a learning curve when integrating an API and you have to consider the costs associated with implementation and usage.

That's why we've been promoting third-party plugins which integrate our API into your software: they remove some of that anxiety and make implementation straightforward. One such plugin that we've been very fortunate to have made on our behalf for WordPress is called Proxy & VPN Blocker, and it's made by an independent developer called Ricksterm.

It was released in December 2017 and has received regular and substantial updates since then. It was the first third-party plugin to integrate our v2 API and it's the only plugin we're aware of that can show your real-time query usage and positive detection log outside of our own customer dashboard.

Recently it gained the ability to protect not just login, registration and comment submission pages but, if you so choose, your entire website, while caching API responses to save you money and reduce its impact on your page loading times. In its most recent update it even gained the ability to block countries.

With WordPress powering an estimated 30% of websites, we feel this plugin is very important, and so we want to incentivise our customers to donate towards the plugin's continued development. To that end, in partnership with Ricksterm, we're offering two promotions.

If you donate $15 to the plugin through the WordPress plugin page here, you will be given our 10,000 queries a day package for a period of one year. This package usually costs $19.10 when purchased annually or $23.88 when paid for monthly.

We're also offering a promotion for the next package up: if you donate $30 you'll receive our 20,000 queries a day package for a year, which usually costs $38.30 when purchased annually or $48.88 when paid for monthly.

We (proxycheck.io) will not receive any of the money you donate; it will all go to Ricksterm, who develops the WordPress plugin. We're merely giving you a free gift for your donation to him.

When donating either $15 or $30, please supply your email address to Ricksterm through the donation note feature so that he can pass it on to us and we can give your account the query volume specified above. And of course, make sure you've signed up to proxycheck.io first!

One last thing to note: you don't need to use the queries exclusively with the WordPress plugin. They are normal queries, exactly the same as you would receive when making a purchase through our own website, so you can use them in any way and with any plugin or self-created implementation of our API.
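
To illustrate what a self-created implementation can look like, here's a minimal Python sketch that checks a single address against our v2 API (the key and address are placeholders, and you should confirm the exact parameters against our API documentation).

```python
import requests  # pip install requests

API_KEY = "your-api-key-here"  # placeholder
ip = "8.8.8.8"                 # placeholder address to check

resp = requests.get(
    f"https://proxycheck.io/v2/{ip}",
    params={"key": API_KEY, "vpn": 1, "asn": 1},
    timeout=5,
)
data = resp.json()
print(data.get("status"))                 # overall API status
print(data.get(ip, {}).get("proxy"))      # "yes" or "no" for this address
```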

We hope many of you will take advantage of the promotions as they represent quite significant savings. We intend to continue offering them for as long as the WordPress plugin remains actively supported by the developer.

If you've made, or intend to make, a plugin at this quality level, please let us know. We would love to feature your plugin on our website and perhaps even support you with a similar promotion so you too can earn something for your contribution to our API.

Thanks!


Sharing some performance metrics

Since we last updated our API 48 hours ago we've been recording the average TTFB (Time To First Byte) of our service at our CDN (Content Delivery Network), and we've been comparing those numbers to the numbers from the period leading up to the upgrade.

What we've found is a vast difference in performance, with the new code far outperforming the old. We've made a graph below showing the numbers, followed by a brief analysis.

[Graph: API response times, 48 hours before the code change (red) vs. 48 hours after (blue)]

What you're seeing above is 48 hours of queries to our API leading up to the code change in red, and 48 hours after the code change in blue. On the left axis is the percentage of queries, and along the bottom is the time it took that percentage of queries to be answered by our API.

These numbers include not just the processing time on our servers but also the network overhead: the time it takes to retrieve the answer from us over the internet.

As you can see in the graph, the new code vastly outperforms the old code. Whereas before we were only answering 1.76% of queries within 25ms, we're now answering 23.07% of queries within 25ms.

Where before we answered 18.47% of all queries in under 100ms, we're now answering 62.64%. And where previously we answered 59.56% of all queries in under 200ms, we're now answering 90.96%.
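
If you'd like to derive the same kind of figures from your own logs, here's a small illustrative sketch showing how the percentage of queries answered within a given time can be computed from a list of response times (the sample latencies below are made up).

```python
# Illustrative: derive "% of queries answered within X ms" from raw samples.
def percent_under(latencies_ms, threshold_ms):
    within = sum(1 for t in latencies_ms if t <= threshold_ms)
    return 100.0 * within / len(latencies_ms)

samples = [12, 48, 97, 150, 210, 19, 88, 300, 64, 23]  # made-up TTFB samples
for threshold in (25, 100, 200):
    print(f"under {threshold}ms: {percent_under(samples, threshold):.2f}%")
```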

These changes mean you can now use the API in more latency-sensitive deployments. We couldn't be more thrilled with these results and we're very excited to share the difference in performance with you.

Thanks for reading and we hope everyone had a great weekend!


Updated v2 API with faster VPN and ASN lookups now live!

At the end of April we shared with you some performance numbers for the new update to our v2 API which enhances VPN and ASN lookup speed. Today we're pleased to announce that the update is now live on our v2 endpoint.

This update has been a large undertaking, as we focused not only on speed but also on improving accuracy. For over a year we have been painstakingly adding VPN providers to our dataset, but frankly there are thousands upon thousands of datacenters all over the world that can, at a moment's notice, offer service to any of the thousands of VPN providers operating globally.

So we settled on a new strategy. The way we were blocking VPNs previously (blocking the ASNs that serve specific datacenters) was a good approach, but it had flaws: we couldn't make exceptions for companies that use those same ASN blocks for residential or business internet access, and we often gave out the incorrect provider name for a VPN service when we blocked its ASN range.

With our new VPN code launched today, both of those issues have been solved. We can now block ASNs while making exceptions for specific IP ranges or providers, and we always give you the most accurate provider name for a specific IP, even if it shares an ASN range with another company.

Another change is that we're now using a new machine learning system for VPN detection. This is a real-time inference engine which makes determinations for all queries that have the &vpn=1 flag. In testing, this new engine has already broadened our VPN detection rate by 8% when combined with our previous VPN detection methods.

The last thing we wanted to discuss is our Real-Time Inference Engine for proxy detection. With this update to v2, where we've introduced the new VPN Inference Engine, we have made quite a performance breakthrough. By using enhanced math functions in our nodes' processors, combined with pre-computing computationally heavy operations and storing their results, we have been able to reduce inference time from an average of 250ms to just 1.5ms. This is why we have not added an option to disable the Inference Engine when performing VPN checks: it's simply so fast there was no need.
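
The pre-computation part of that change is essentially memoisation: perform the expensive math once, store the result, and reuse it on subsequent queries. Here's a generic Python sketch of the pattern (illustrative only, not our actual engine code).

```python
from functools import lru_cache
import math

# Illustrative only: cache the results of an expensive transform so repeated
# inference calls pay the cost once instead of on every query.
@lru_cache(maxsize=None)
def expensive_feature_transform(value: float) -> float:
    # Stand-in for a computationally heavy step in an inference pipeline.
    return math.exp(-value) * math.log1p(value * value)

def score(features):
    # Each unique feature value is transformed once; later lookups are O(1).
    return sum(expensive_feature_transform(v) for v in features)

print(score((0.5, 1.25, 0.5, 3.0)))  # 0.5 is only computed the first time
```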

And that brings me to the benchmarks. In our testing with VPN, ASN and Inference checks enabled, supplying the API with 1,000 IPs in a single query would previously use up the entire 90-second query window and only check 300 of the 1,000 addresses.

With the new code we're able to supply 10,000 IP Addresses with the same flags enabled and receive results for all 10,000 addresses within 10.5 seconds. This is a vast improvement which means you no longer need to forgo VPN, ASN or Inference Checks to get the fastest results possible. For single queries checking a single address we're seeing a consistent query time of under 6ms (after network overhead).
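
If you'd like to try a multi-check yourself, here's a hedged Python sketch; we're assuming a comma-separated `ips` POST field here, so confirm the exact field names against our API documentation before relying on it.

```python
import requests  # pip install requests

API_KEY = "your-api-key-here"  # placeholder
addresses = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]  # placeholder batch

# Assumption: multiple addresses are sent as a comma-separated "ips" POST field.
resp = requests.post(
    f"https://proxycheck.io/v2/?key={API_KEY}&vpn=1&asn=1",
    data={"ips": ",".join(addresses)},
    timeout=30,
)
results = resp.json()
for ip in addresses:
    print(ip, results.get(ip, {}).get("proxy"))
```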

If you're not already using our v2 API we highly recommend the upgrade: not only is VPN detection more accurate, but the speed enhancements are unreal. We have ported some of this functionality back to v1 to maintain compatibility, but we cannot guarantee it will be as fast. As always, all of these new features are available immediately to all customers, whether you're on a paid or free plan.

Thanks for reading, we hope everyone has a great weekend.


Upcoming improvements to VPN and ASN results

When we launched our recent refresh of the v2 API in March, we spoke about some of the things we were planning for the near future, including VPN and ASN lookup speed enhancements.

Today we're ready to share a preview of those performance enhancements, and they are quite significant. First, here is how long it takes to check 100 IP Addresses under the current v2 API when checking for both VPNs and ASNs, but with the Real-Time Inference Engine turned off.

Current v2 API with VPN and ASN checks enabled: 100 IP Addresses in one query

Query Time: 22.326 Seconds

And now with our new code, with the same level of accuracy.

New v2 API with VPN and ASN checks enabled: 100 IP Addresses in one query

Query Time: 0.452 Seconds

That's a dramatic improvement. But look what happens when we check 1,000 IP Addresses using the new code.

New v2 API with VPN and ASN checks enabled: 1,000 IP Addresses in one query

Query Time: 2.882 Seconds

And this is with no caching: all of these addresses were generated randomly and hadn't been put through the API previously. With this kind of speed improvement there's no longer any reason not to enable VPN and ASN checks. In testing we found that the previous code would take between 250ms and 350ms to return both a VPN and ASN answer for a single address in a single query.

But with the new code we're seeing results of between 6ms and 10ms (depending on the node answering the query) for a single address in a single query, and between 2ms and 3ms per IP when performing a multi-check. These are huge improvements, and it's not just about speed: the new code also enables enhanced VPN checks so we can detect VPNs more effectively.
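
If you'd like to reproduce this kind of measurement once the update ships, the method is straightforward: generate random, previously unseen addresses and time a single multi-check. Here's an illustrative Python sketch (the `check_batch` callable is a stand-in for whatever code you use to send a batch to the API).

```python
import random
import time

def random_ipv4() -> str:
    # Generate a random IPv4 address purely for benchmarking purposes.
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def time_batch(check_batch, batch_size: int) -> float:
    """Time one multi-check of `batch_size` random, uncached addresses.

    `check_batch` is any callable that takes a list of IPs and sends them
    to the API in a single query.
    """
    ips = [random_ipv4() for _ in range(batch_size)]
    start = time.perf_counter()
    check_batch(ips)
    return time.perf_counter() - start
```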

We expect to roll this update out on our v2 API later this week. You won't need to alter any of your client-side code, as the result format from the API is not changing.

Thanks for reading and have a great week!


New Cluster Node: ZEUS!


For some time we've been looking for a new server to add as a node within our cluster to replace ATLAS, one of our current nodes.

We added ATLAS to the cluster last year mainly as a way to get more redundancy: the chances of three geographically separated servers going down at the same time are lower than the chances of two doing so.

But with queries now more than 10x what they were when we added ATLAS, it has come time to let it go and replace it with a more capable server.

Here are the specs of ATLAS, HELIOS and PROMETHEUS.

  • ATLAS: Core i3, 3.3GHz, 2 Cores, 2 Threads, 8GB of RAM - 100Mbps network
  • HELIOS: Core i7, 3.4GHz, 4 Cores, 8 Threads, 16GB of RAM - 1Gbps network
  • PROMETHEUS: XEON E5, 3.6GHz, 16 Cores, 32 Threads, 64GB of RAM - 400Mbps network

As you can see, ATLAS is by far the weakest node, and although it did its duty by giving us the redundancy we wanted, it simply couldn't keep up with some of our more demanding features such as customer statistics syncing and the inference engine. It was essentially pegged at 95% to 100% CPU load practically all day, every day.

So instead of adding a fourth node and keeping ATLAS, we've decided to retire ATLAS and replace it with a brand new node. Here is the specification of the new ZEUS node.

  • ZEUS: XEON E3, 3.7GHz, 4 Cores, 8 Threads, 32GB of RAM - 1Gbps network

The new node is online within our cluster right now, and we will be removing ATLAS soon, perhaps even by the time you read this post. It has served us well and we bid it farewell!

We are still looking to add servers worldwide, and we may very well add a fourth server later this year with a similar specification to HELIOS or ZEUS.

Thanks for reading and have a great weekend!


New Status Code System

Today we've introduced a new status code and message system to the v2 API. This was prompted by a user request and we felt it was a very useful feature for the API to have: standardising our errors and warnings makes the API easier to code against.

We have updated our API Documentation with the new information and below is a screenshot of that new section.

[Screenshot: the new status code section of our API documentation]
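
As a rough illustration of how client code might consume the new fields, here's a hedged Python sketch; the exact status strings and field names should be confirmed against the documentation above.

```python
import requests  # pip install requests

resp = requests.get(
    "https://proxycheck.io/v2/8.8.8.8",
    params={"key": "your-api-key-here"},  # placeholder key
    timeout=5,
)
data = resp.json()

# The v2 API now reports a standardised status alongside the result.
status = data.get("status")
if status in ("warning", "denied", "error"):
    # "message" explains what went wrong (bad key, over quota, etc.)
    print(f"API reported {status}: {data.get('message')}")
else:
    print(data.get("8.8.8.8", {}).get("proxy"))
```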

We hope you all like the change. We were careful not to break compatibility with any v2-supporting clients while implementing the new status system.

Thanks!


WordPress plugin update!

Today Ricksterm, the independent developer behind the WordPress Proxy & VPN Blocker plugin, has released a major update which adds the ability to perform checks across your entire WordPress website. Previously it supported checking and blocking only on signup, login and comment posting pages, but now you can choose to enable it site-wide!

[Screenshot: the new site-wide blocking option in Proxy & VPN Blocker]

This has been a much-requested feature and we thank him for his continued support of the plugin with frequent, substantial updates. Alongside this new feature, the plugin has also gained an improved stats view with support for showing countries, and a new slider which lets you adjust the detection sensitivity.

[Screenshot: the improved stats view and detection sensitivity slider]


If the service is free then you are the product

The title of this post is a common phrase you'll see online in privacy-minded communities. In general it's true, and it has been true for as long as products have been offered to consumers for "free".

Since we started, we've had customers enquire about what we're doing with the data they send us. Specifically, when they send us a customer's IP Address, do we correlate that with their web property and then sell that information?

For example, if you operate a store that sells guitars and you use our service for your registration or checkout system, are we recording the IP Addresses you send us for proxy checking and then handing that data off to a marketing company so they can run targeted guitar-related ads at your visitors?

With the recent Cambridge Analytica disclosures involving Facebook, we've been asked this question much more frequently than before, so we thought it would be a good idea to write a blog post about our stance.

So the question is: do we sell your information? The answer is no, we do not. In fact, we do not make available any of the data our customers entrust to us. The only third parties we ever allow to handle your data in any way are Stripe, our card payment processor, and mailgun, our email delivery service, and both of these companies receive only the bare minimum of your personal information needed to perform the duties we've entrusted to them.

For Stripe that means your bank card information to perform transactions, and for mailgun that means your email address. Beyond that they don't receive anything else, and neither does anyone else. We simply do not make customer information available in any form, even as aggregate data, to any third party, period.

Now of course the question is: if our free customers aren't our product, how are we staying profitable? Well, our business model is built around converting free customers into paid customers. We give unregistered users 100 queries per day and registered users 1,000 queries per day, both for free.

Then, as those customers' needs grow, meaning they're regularly making over 1,000 queries per day, we attempt to convert them into paying customers. We do this in a few ways: firstly, the stats on the dashboard help users determine their own query volume needs, and secondly, when you go over your query allotment for five days in a row we send you a single email to let you know.

Essentially, a single $29.99 subscription, which is our most popular paid plan right now, can subsidise the usage of several hundred free users. That's part of what enables us to offer a very competitive free plan with feature parity with our paid plans.

The other part is that we designed proxycheck.io from day one to scale across multiple servers: not just the API, but every facet of our service, including our website with its customer dashboard and web interface. As the queries hitting our API have grown, the cluster has let us meet that increasing demand efficiently and with very little waste.

Looking at some of our competitors' infrastructure, we've seen them place free customers on one server and paid customers on another. We've also seen competitors set up dedicated servers for individual paid customers. While that sounds very premium on the surface, the reality is that it increases the chance of failure, and it's very inefficient business-wise: you end up with under-utilised resources that have to be paid for regardless of a server's actual usage, which also makes their premium plans exceedingly, and in some cases outrageously, pricey. Essentially you pay more, but you get less.

Our custom cluster architecture has allowed us to maximise our resource use, so all of our customers benefit equally from the increased performance and redundancy that adding more servers brings, while keeping our costs low because we don't have to keep paying for under-utilised servers. All of that means we can offer our generous free plans while respecting all of our customers' privacy.

When companies sell their customers' data while also having paid plans, we call that double dipping. Frankly, we think the global privacy situation right now is in a very poor state, and we don't want to be a part of the problem. We have welcomed the GDPR (General Data Protection Regulation) because for too long internet companies have been operating like it's the wild west when it comes to user privacy and data ownership rights.

We hope this blog post has made our stance clear: we have no plans to make customer data available to third parties. Frankly, we don't want to know who your website visitors are or what your website does. All we're interested in is making the best Proxy and VPN detection API at the lowest possible cost, and we can certainly do that without invading anyone's privacy.

Thanks for reading and have a great week!


v2 API adoption rates

On January 1st 2018 we introduced our v2 API, the first new version of our API since we started in April 2016. Sure, we'd re-coded v1 quite significantly several times since launch, but the v2 rewrite changed everything. It really was a from-the-ground-up re-implementation, with almost no shared code between v1 and v2, largely due to the main feature we wanted to implement: the ability to check multiple IPs with a single query, a feature we call multi-checking.

I'm pleased to say that since the launch we have seen a high proportion of registered users adopt the new API. In fact, 46.37% of all registered users are now making use of the v2 API, and 79.54% of all queries made today by registered users went to the v2 endpoint. Our largest customers have been the fastest movers in this regard and jumped onto the v2 endpoint very quickly.

For a product that's only just over three months old, that's incredible adoption. It means we're getting our messages out to our customers, and that they trust us to make the right decisions with a product they rely on every day.

When it comes to unregistered users, the adoption rate is quite a bit lower: only 11.56% of unregistered users who queried the API today did so through our v2 endpoint. We expected this, because many of these users rely on third-party software that still uses the older v1 API, and they haven't configured the software with an API key yet.

We're still two years away from ending support for v1, so these numbers are incredibly encouraging. The third-party software that has implemented our v1 API will likely be updated or replaced before then, but we're leaning towards creating a simple shim at the v1 endpoint which forwards queries to the v2 endpoint and translates the answers back into the v1 format. This won't take much effort from us and will guarantee no one gets left behind; we know people sometimes implement an API and then forget about it, and we don't want to leave anyone unprotected.
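
To give a rough idea of what such a shim could look like, here's a heavily simplified Python sketch that forwards a single-address v1-style lookup to the v2 endpoint and flattens the reply back into a v1-style shape (the field names are illustrative, not the published formats).

```python
import requests  # pip install requests

V2_ENDPOINT = "https://proxycheck.io/v2/"

def v1_style_lookup(ip: str, key: str = "") -> dict:
    """Forward a v1-style single-IP lookup to v2 and flatten the response.

    Simplified illustration: real field names and status handling would need
    to match the published v1 and v2 documentation exactly.
    """
    params = {"key": key} if key else {}
    v2 = requests.get(V2_ENDPOINT + ip, params=params, timeout=5).json()
    entry = v2.get(ip, {})
    # v1-style responses were flat, describing only the single address queried.
    return {"proxy": entry.get("proxy", "no")}

print(v1_style_lookup("8.8.8.8"))  # placeholder address
```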

So that's the update we wanted to share. The high adoption rate among our registered customers has been a great win for us, and the new v2-specific features we launched in March have only accelerated it. We don't poll users about their satisfaction with our service, but many like to write in and tell us anyway, and I'm pleased to say satisfaction is incredibly high; we've been able to provide a stable service whilst responding to new feature requests from our customers.

We love to hear from all of you: if you have a new feature idea, have found a bug, or just want to tell us what you think of the service, please get in touch and let us know!

