Yearly Subscriptions

When we switched to monthly subscriptions, one piece of feedback we heard repeatedly was that some customers simply do not want a monthly subscription. They don't want money coming out of their bank account each month, and they don't want it on their statements. They'd rather pay for an entire year up front so they don't need to think about it again for a year.

That's a completely valid perspective, and we understand it. Here at proxycheck.io we pay for things like domain names, hosting, password managers and virtual private networks, and we too prefer to pay for a year up front for the same reasons. It also helps that when you pay for a year up front you usually save some money.

So in the interest of choice we've broadened our subscriptions: we now offer both monthly and yearly plans. Every monthly plan now has a yearly equivalent with the same query volume, and if you enter into a yearly subscription you save 20% compared with holding the monthly subscription for 12 months.

So you can try the service for a month with the query volume you need, and once you're sure it meets your needs you can pay for an entire year up front and save 20%. But if you prefer the flexibility of paying for the service month-to-month, you can continue to do that too.
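
To make that saving concrete, here is the arithmetic as a small Python sketch (the $7.99 plan from our pricing is used purely as an example; the pricing page lists the actual yearly prices):

    def yearly_price(monthly_price: float) -> float:
        # Yearly plans cost 12 months at the monthly rate minus the 20% discount.
        return round(monthly_price * 12 * 0.80, 2)

    # For example, a $7.99/month plan comes to $76.70 for the year
    # instead of $95.88 paid month-to-month.
    print(yearly_price(7.99))  # 76.7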

We've updated our pricing page to reflect the new plan options, and you can subscribe to a yearly plan from your dashboard right now. We hope you all like the new changes; they are a direct result of your feedback.


New subscription pricing

Since we changed from yearly to monthly subscriptions we've had a lot of feedback from customers who purchased our prior yearly plans. They felt the new subscriptions did not provide the same value, and they were concerned that when their paid year ran out they would not be able to afford a monthly subscription with the query volume they needed.

Only our highest paid tiers received better value than before, and those customers are in the minority: most of our sales were around $120 and under (for an entire year).

So we've listened to your feedback, looked at the numbers, and decided we can lower the monthly prices and create a more linear pricing structure that makes sense for smaller developers. Here is our new pricing:

Previously our lowest subscription plan started at $5 per month for 10,000 queries. We now have two plans below that: 10,000 queries for $1.99 and 20,000 for $3.99. (Those of you who already subscribed to our monthly plans have been automatically moved to the most affordable plan with the same daily query limit you paid for, and refunded the difference.)

Similarly, our most popular middle plan of 80,000 daily queries used to cost $120 a year; when it became a monthly subscription that rose to $20 per month, or $240 a year. Under the new pricing it is just $7.99 a month, which works out to $95.88 a year: less than half its previous monthly cost and still lower than our previous yearly pricing.

We hope the new pricing will ease any fears that the service has become too expensive. Your feedback is invaluable to us; without it we probably would have kept the higher pricing for much longer, and that wouldn't have been a good thing. It's not our intention to shut smaller developers out of our service, we want everyone to be able to protect their service no matter their size.

The new pricing also makes a lot more sense for people on our free tier: it's much easier to accept a jump from free to $1.99 than to $5. We're not, after all, Netflix or Spotify; charging $5 for the smallest paid subscription just didn't feel right.

Many people who need just a few more queries than 1,000 are only protecting a hobby, be it a discussion forum, an online game or the login forms on a blog. They shouldn't be burdened with paying $5 for something that doesn't make them any money, so we feel the $1.99 price, less than the cost of a coffee a month, is more than enough to satisfy those needs while still being enough for us to pay for our servers.

Thanks for reading, and as always please feel free to write to us at [email protected], just as many of you already did, which resulted in the lowered pricing we've announced today.


Ocebot update

On July 4th we wrote a post about our new software robot, Ocebot (a combination of the words Ocelot and Bot). Today we'd like to give you some insight into what we discovered as we pore over the past 10 days of data since that post.

Before we get into the data though, let's just run through the kinds of things Ocebot has been doing (a rough code sketch of this routine follows the list):

  1. Querying the API around once a minute for 10 days straight
  2. Making proxy-only and VPN requests
  3. Making queries it already knows the answer to
  4. Making malformed queries to see how the API responds
  5. Forcing the server to record detailed server-side analytics when answering Ocebot queries
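
Ocebot's actual implementation isn't public, so purely as an illustration, a monitoring loop like the one described above might look something like this in Python (the test addresses, expected answers and endpoint parameters are placeholder assumptions):

    import json
    import time
    import urllib.request

    # Hypothetical sketch of an Ocebot-style loop: query the API about once a
    # minute with addresses whose answers are already known, and record how
    # long each answer takes. The addresses below are placeholders.
    KNOWN_ANSWERS = {"1.2.3.4": "yes", "5.6.7.8": "no"}

    def check(ip: str) -> tuple[str, float]:
        start = time.monotonic()
        with urllib.request.urlopen(f"https://proxycheck.io/v2/{ip}?vpn=1") as resp:
            data = json.loads(resp.read())
        elapsed_ms = (time.monotonic() - start) * 1000  # includes network overhead
        return data.get(ip, {}).get("proxy", "unknown"), elapsed_ms

    while True:
        for ip, expected in KNOWN_ANSWERS.items():
            answer, ms = check(ip)
            if answer != expected:
                print(f"mismatch for {ip}: got {answer}, expected {expected}")
            print(f"{ip} answered in {ms:.1f}ms")
        time.sleep(60)  # roughly once a minute, as Ocebot does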

From this we've gleaned a few things. Firstly, the response time of the API is excellent on a positive detection, with most queries answered in under 50ms including network overhead. For negative detections (meaning every single level of check is performed) the average time is 250ms, again including network overhead but without TLS enabled.

The second thing we found is that response times are very consistent: our averages aren't changing throughout the day, and we're not seeing much difference between our nodes in the time they take to answer a query. That's a good thing, as slow nodes would create inconsistency for our customers.

The third thing we found was some edge cases in our code that could create a high-latency response due to the logging of errors. We're talking in the millisecond range here, but when we're trying to give responses as fast as possible, every millisecond counts.

The fourth thing we found was an opportunity to optimise our cluster database syncing system. Through the server-side analytics we discovered high CPU usage caused by the encryption of data to be synced to the other nodes in the cluster: before we send any data to another node through our persistent machine-to-machine data tunnel, we encrypt it with AES-256.
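
As a rough illustration of that encrypt-before-send step (this is not our actual code; the key handling and message framing are simplified assumptions), encrypting a sync payload with AES-256 might look like this:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Simplified sketch: encrypt a database update with AES-256 (GCM mode)
    # before sending it to another node over the machine-to-machine tunnel.
    key = AESGCM.generate_key(bit_length=256)  # in practice a pre-shared key
    aesgcm = AESGCM(key)

    def encrypt_update(payload: bytes) -> bytes:
        nonce = os.urandom(12)  # must be unique per message
        return nonce + aesgcm.encrypt(nonce, payload, None)

    ciphertext = encrypt_update(b'{"ip": "1.2.3.4", "proxy": "yes"}')
    # Doing this for every small update is what adds up to the CPU cost
    # described above when updates are frequent.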

This can be CPU intensive if the data being transferred is constantly changing, requiring lots of database updates to other nodes. By looking at the Ocebot data we could see that a lot was being synced that didn't need to be: lots of high-activity data alterations that are only really important to the machine handling your API query and are not needed by the other nodes in the cluster.

So we've moved some of this data to a local cache on the node handling the request, for cases where it will never be needed by another node.

Some data does need to be shared with other nodes, just not immediately, so we've also added some granularity to how frequently certain pieces of data are synced. This lets us benefit from update coalescing: combining multiple smaller database updates into one larger update that is transferred to other nodes less frequently.
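
A minimal sketch of the idea, assuming a simple key-value update model (this is not our actual syncing code):

    import threading
    import time

    # Buffer changes per key and flush the combined result on an interval,
    # so many small database updates become one larger, less frequent one.
    class CoalescingSyncer:
        def __init__(self, flush_interval: float = 5.0):
            self.pending: dict[str, dict] = {}
            self.lock = threading.Lock()
            self.flush_interval = flush_interval

        def record(self, key: str, update: dict) -> None:
            with self.lock:
                # Later updates to the same key merge with earlier ones, so
                # only the latest combined state is ever transferred.
                self.pending.setdefault(key, {}).update(update)

        def run(self) -> None:
            while True:
                time.sleep(self.flush_interval)
                with self.lock:
                    batch, self.pending = self.pending, {}
                if batch:
                    send_to_cluster(batch)  # one transfer instead of many

    def send_to_cluster(batch: dict) -> None:
        print(f"syncing {len(batch)} coalesced updates")

    syncer = CoalescingSyncer()
    threading.Thread(target=syncer.run, daemon=True).start()
    syncer.record("stats:1.2.3.4", {"queries": 1})
    syncer.record("stats:1.2.3.4", {"queries": 2})  # coalesced with the first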

By doing it this way we've significantly reduced the CPU usage of our cluster syncing system, which should (hypothetically) increase our nodes' API response throughput in the future when we're closer to full node utilisation.

Our experiments with Ocebot are ongoing; already we've discovered some incredibly useful information that has directly improved proxycheck.io. Over the next few weeks we will be enhancing Ocebot so it can perform tests on our new Inference Engine, not to judge accuracy but to gauge performance and to make sure it's getting faster at making determinations.

Thanks for reading and have a great day!


Introducing the proxycheck.io inference engine

Prior to today, proxycheck.io's data was scraped from many websites across the globe, the kind that list proxies for sale or for free use. But we've been working on introducing our own inference engine for some time now.

Put simply, this is a type of machine learning where our service gathers information about an IP Address and then, from that evidence, draws likely conclusions about whether the IP is operating as a proxy server.

At this time we're only adding the inference engine's positive detections to our data when it has a confidence level of 100%. In human terms this is the equivalent of an investigator catching a perpetrator in the act, rather than making a judgement call or flipping a coin.

We're doing it this way because accuracy is our number one priority: if we're not confident that an IP Address is operating as a proxy server, it's pointless to say it is in our API responses.

The other caveat is that figuring out whether an IP Address is operating as a proxy server takes time. The inference engine will get faster over time, but to achieve the extremely accurate detections we care about, we have to do the processing after your queries are made.

What this means is: whenever you perform a query on our API that results in a negative detection, that IP Address is placed in a queue to be processed by the inference engine, and if it is determined to be a proxy server it will enter our data. In testing, we believe we can accurately process each IP Address within around 5 minutes of the first negative result.
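
As a simplified sketch of that flow (the helper functions here are placeholders, not our real code):

    import queue
    import threading

    # Hypothetical sketch of the pipeline described above: negative
    # detections are queued and analysed out-of-band, so the original
    # API response is never delayed by the inference engine.
    inference_queue: "queue.Queue[str]" = queue.Queue()

    def lookup_known_data(ip: str) -> dict:
        return {"ip": ip, "proxy": "no"}     # placeholder: existing data only

    def run_inference(ip: str) -> bool:
        return False                         # placeholder: slow evidence gathering

    def add_to_proxy_database(ip: str) -> None:
        print(f"{ip} added to proxy data")   # future queries now return "yes"

    def handle_api_query(ip: str) -> dict:
        result = lookup_known_data(ip)       # fast path answers immediately
        if result["proxy"] == "no":
            inference_queue.put(ip)          # processed within ~5 minutes
        return result

    def inference_worker() -> None:
        while True:
            ip = inference_queue.get()
            if run_inference(ip):
                add_to_proxy_database(ip)

    threading.Thread(target=inference_worker, daemon=True).start()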

Now, obviously, having the IP processed after you've already received a negative result from us isn't immediately useful to you. But since we're seeing millions of queries a day, and proxy servers are used all over the internet for comment spam, automated signups on forums and click fraud, those queries give us a giant window onto the IP Addresses that matter most.

We could, for example, scan the entire internet address space and detect thousands of proxy servers out of the 4 billion possibilities on IPv4 alone, before we even think about IPv6. But that would be incredibly wasteful of our resources and abusive to the internet at large. By only scanning the addresses that are performing tasks on your services (the same tasks proxy servers are used for), we're targeting and training our engine on the data that matters.

During our testing we supplied the engine with 100,000 negative detections from the past day of our own API traffic, and found that 0.4% of those addresses were operating as proxy servers. That's around 400 proxy servers we previously had no knowledge of which are now detected by our API for a minimum of 90 days.

We're absolutely thrilled by the results, and as more developers use the API the inference engine will become a major source of proxy data for us. At the moment we have two versions: a static, non-learning version which is in production, with our total confidence and zero false positives.

And a development version, working from the same data as the production version but with learning enabled; its results are not saved into our production database. Over time, our inference engine's detection rate will rise from the current 0.4% as it becomes more intelligent through iterative machine learning.

Thanks for reading, we hope you enjoyed this post. If you're watching your API responses, look out for proxy: Yes and type: Inference Engine!


Live Chat and Bug Fixes

Live Chat!

As of a few days ago we're featuring a live support chat on all our webpages. We've done this so that you can get instant support without needing to use Skype or iMessage.

The best part of our live chat is that it's manned by our developers; we're not outsourcing it. This means you can receive not just pre-sales information through the chat but also account- and payment-level support. We can handle any query through live chat that previously required our Skype, iMessage or email support.

But of course the new live chat is optional; we're still offering email, Skype and iMessage support, and that's not changing.

Bug Fixing

The other bit of news we wanted to discuss is our Dashboard. A few weeks ago we altered the way the Dashboard is handled server-side to be more secure, but this had some unintended negative effects which didn't show up in our testing. They were mostly niggling bugs, for example:

  1. Setting/Changing your Password or API Key logged you out of the dashboard after performing the changes
  2. No email was sent if you changed your password (but one was sent if a password was set)
  3. Some errors were not handled correctly causing blank pages

So yesterday we did a full audit of the Dashboard code and tested every feature within it. We found numerous minor issues, mostly visual bugs after certain requests were made. We went to work on all of these problems and have solved all of the ones listed above.

We also finally added an account recovery feature, which lets you generate a new password in a secure way if you lose access to your account because you've lost your password. This has been a planned feature since the moment we added password security to accounts, but we had been working mostly on the API and other new features such as account stats and blacklist/whitelist support.

As of two months ago we keep a proper priority-based development ledger listing all the features and bugs we still have to implement or fix. The ledger prioritises bugs, and as of this post we have cleared every bug it listed. If you come across any bugs, please shoot us an email or a live chat message and we'll get right on them!

For a little insight into what we're working on next: our email notices. At the moment they are quite inconsistent in their layout and wording, and we intend to unify all of our emails in appearance.

Thanks for reading and have a great day!


New server ATLAS added to our server cluster

Today we've added a new node to our cluster: a dedicated server we've named ATLAS. It is already serving your queries and is viewable on our service status page.

It is our aim for proxycheck.io to always be accessible, which means our goal is to never have any downtime. We're mitigating that risk by renting servers not only in different data centers but also from different companies and in different countries, and next year we aim to have servers on entirely different continents.

Currently we have PROMETHEUS in the United Kingdom, HELIOS in Germany and now ATLAS in France. The next time we discuss nodes we hope to have a server operational in North America.

As always, our cluster operates transparently to users: you do not need to specify which node your traffic goes to, as your queries are routed automatically by us. The cluster answers all queries, not just paid ones but free ones too.

With the volume of queries we receive per day reaching into the millions, we decided to add a third server sooner rather than later. Not because we're maxing out the servers we already had (our efficient API backend means we're nowhere near that point), but because we felt it was important to add more redundancy to the cluster as our customer base grows.

We hope you're finding our blog posts interesting; it's an enjoyable way to tell our story and keep people informed about what we're up to. The service is constantly being worked on behind the scenes, and although you may notice the occasional visual change to the site, it's mostly the things you don't see that are being worked on most of all.

Thanks for reading and have a great day!


New email notifications

When we first started last year we didn't send our users any emails except when they first signed up to our service. But since then we've added a lot of functionality, and there are some account events you definitely want to be kept informed about. For example, we send you an email if any of the following occurs:

  1. Your account has a password set or changed
  2. Your email address is changed
  3. Your API Key is changed
  4. Your paid plan ends

And as of a few days ago, we now send you an email if you go over your daily query allotment for five consecutive days. But don't worry, we limit these emails to one per 30 days so as not to be spammy.
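
For the curious, the rule amounts to something like the following (a minimal sketch, assuming we track which days an account went over quota; this is not our actual code):

    from datetime import date, timedelta
    from typing import Optional

    def should_notify(over_quota_days: list[date], last_notified: Optional[date]) -> bool:
        today = date.today()
        # The five most recent over-quota days must be consecutive and end today.
        recent = sorted(over_quota_days)[-5:]
        five_straight = (
            len(recent) == 5
            and all(recent[i + 1] - recent[i] == timedelta(days=1) for i in range(4))
            and recent[-1] == today
        )
        # At most one notification email every 30 days.
        cooled_down = last_notified is None or (today - last_notified) >= timedelta(days=30)
        return five_straight and cooled_down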

If you don't want to receive these emails at all, you can of course toggle them off from the dashboard by unticking the "Send me important emails related only to my account" checkbox.

This is the only email change we have made. We think it will really help our customers because, based on our internal statistics, about 5% of our daily active users are going over their query limits, and as our service is the kind of thing you set up once and forget about, they may never realise that some of their queries are being denied by our API.

Already since we added the new notification emails we've seen two customers alter their querying behaviour and a third customer go from a free plan to a paid plan. All of these customers were notified by our software that they had been going over their query allotments for five days in a row.

So the important thing for us was striking the right balance between keeping our customers informed and not burdening them with lots of spammy usage emails. We think we've done that, but of course you can disable these emails entirely as explained above.

Thanks for reading and have a great day!


proxycheck.io moves to monthly subscriptions

Hello everyone, today we've got some exciting news about the service: we have changed business models slightly. Instead of having you purchase an entire year's worth of service up front, we've changed to offering a monthly subscription.

There are many reasons for this change, which we're going to go through in this blog post. But before that, check out the snazzy new subscription buttons we've got!

Pretty subscription buttons

Before we get into the why, I just want to assure everyone reading: if you're one of the many people who purchased a year's worth of service up front, your service is not ending or being cut short in any way. Your full year of service will continue, and once it ends you can choose to change to one of our new subscription plans or return to the free plan we've always offered.

And to our customers using our free plan: don't worry, we're not getting rid of it. The free plan is very important to us, and as you know we do not segregate features, so those of you on the free plan get every feature our paid customers do, without exception.

So why did we do this?

Paying for a year up front is expensive

Firstly, asking people to pay for an entire year's worth of service up front is a big ask, especially for startups and smaller developers. They may not have the funds to pay $100 or $200 for the hundreds of thousands of daily queries their new site or app needs. A subscription allows the cost to be spread out, enticing more people onto a paid plan who just didn't have the money before.

Hard to try before you buy with a costly 1 year commitment

Another reason is that a lot of our potential customers asked us, "can I just get a monthly plan to try the service?" You may say, couldn't they just use the free 1,000 queries to test it? Well yes, but if you're running a huge website that small number of queries will get eaten up in minutes (as we have already seen with some customers). These customers would prefer to try the service with as many queries as they need, without breaking the bank in the process.

We undervalued our service to make the 1-year commitment feel less expensive

The third reason is that we were significantly undervaluing the API; our low per-year cost was there to reduce the sticker shock. We believe we were underpricing the plans we offered by 2-3x. Again, making people pay for an entire year's service in one go is a difficult proposition, but if we leave money on the table, that's less money we have to invest in new servers and technologies.

It allows our customers to upgrade sooner and as their needs require it

Previously you were locked into a year-long commitment, so if you needed more queries sooner you had to purchase a larger paid plan up front, and that can get very expensive, especially if you're seeing explosive growth but not explosive profit margins.

It allows us to offer bigger plans

Due to the high cost of plans that had to be paid 12 months up front, we were reluctant to offer very high query allotments, at least on the website itself; we have accommodated some customers who contacted us with larger needs than we presented on the site.

But now that we're charging monthly we can offer larger plans, and that is exactly what we have done. Previously 240,000 daily queries was our largest paid plan, at $360 for 12 months. We now offer 2.64 million queries per day for $100 a month, and 320,000 queries for just $30 a month, the same price users previously paid for only 240,000 queries. Spreading the cost certainly makes it much more attractive.

In fact, the first six plans we offer, priced from $5 to $30, double the previous plans' query allotments. So as your service and your needs grow, you need only pay an extra $5 per month to get twice as many queries as before.


Now, we understand a switch like this may disappoint some people: our most affordable plan has gone from $15 to $60 when extrapolated over a year's worth of service, and any way you slice it, that is a big increase. The fact the up-front cost has gone from $15 to $5 isn't much consolation when you look at the annual cost; we know this.

But I hope you can understand that we undervalued these plans on purpose, because we did not have the resources at the beginning to offer monthly paid subscriptions; setting up recurring payments, even with Stripe, is time consuming, and we have been building out the entire service over the past year. In fact, we only launched paid plans very late last year, some 7 or 8 months after the site first started.

As you can imagine, having to pay $60 up front for only 10,000 queries wouldn't have been very attractive to anyone. Most of our customers actually purchased our middle tiers at around $60-$120 because they needed those higher query packages (40,000-80,000). Now customers can get those same plans for only $15 a month, which makes them much more attractive up front.

If you have any questions about these changes or anything else related to our service, please do email us, as many of you have been doing; it is important that we hear your feedback. Thanks!


Code refactoring, bug fixes and performance improvements

Over the past week we've been very busy around proxycheck.io, not designing new user-facing features but improving the existing code base in several ways:

  1. Refactoring the code so it's more compact, easier to read and edit in the future
  2. Fixing some edge-case bugs that have been discovered
  3. Improving performance of the API to lower latency

We're not completely done with these efforts, but we have gotten pretty far. Almost all of the code that runs proxycheck.io has been improved in some way, including the API itself, the dashboard, the web interface and more.

The API already answers queries incredibly quickly, even as we see several million queries a day. But during our testing we were able to shave 2 entire seconds off a 500-query lookup through our web interface with our new performance-optimised code, which, extrapolated over the millions of daily queries we handle, results in huge time savings. (If you're curious, it was a reduction from 22ms per query to 18ms per query with proxy and VPN checks enabled.)

To assist us in tracking down potential code optimisations we have built a new tool called Ocebot (a play on the words Ocelot and Bot) which makes automated queries to our API all day, every day. Each query supplies a special flag to the API which triggers the recording of that query at a deep architectural level on the node that handled it.
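
Purely as an illustration, such a query might look like the following; the profiling parameter is invented for this sketch, since the real internal flag isn't public:

    import urllib.request

    # Hypothetical illustration of an Ocebot-style query. The "profiling"
    # parameter is invented for this sketch; the real internal flag that
    # triggers deep per-query recording isn't documented.
    ip = "1.2.3.4"
    url = f"https://proxycheck.io/v2/{ip}?vpn=1&profiling=1"
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode())
    # On the node that served this request, the flag would cause timings
    # for each internal function to be recorded for later analysis.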

The data about that query is then saved, and over time it forms statistics we can analyse. At a glance we can see which functions in our code are the slowest, what kinds of anomalies slow queries down some of the time but not all of the time, and which software optimisations we should be looking into across our overall architecture, meaning the operating system, web server and databases.

Due to the way Ocebot works, only the queries it makes itself are recorded, so there is no performance impact on the queries made by our customers. But it makes queries similar to those made by our customers, and some of its queries will even contain malformed data so that we can see the performance impact of bad queries.

We hope this blog post was interesting. I'm sure we'll have some data to share on the exploits of Ocebot in the future.


3rd Party Software and the proxycheck.io API

Since we started the API there have been a few enquiries about our policy on third parties producing software that integrates our API, and specifically whether we allow you to then sell that software.

So we thought it would be a great idea to explain our policy and our general thinking. Firstly: yes, we completely allow it. You are free to make any software you want that integrates our API, and we are more than happy for you to sell that software. You do not need to contact us first and ask permission; simply make whatever it is you want to make.

Of course, we would like you to provide some way for your users to input their own proxycheck.io API key into your software, so they can manage their proxy/VPN checking from the dashboard on our website.
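
A minimal integration sketch might look like this (the endpoint shown assumes our v2 API format; consult the documentation for the current parameters):

    import json
    import urllib.request

    # Sketch of the integration described above: let the user supply their
    # own proxycheck.io API key, then pass it with each query so usage is
    # tied to their dashboard.
    def is_proxy(ip: str, api_key: str) -> bool:
        url = f"https://proxycheck.io/v2/{ip}?key={api_key}&vpn=1"
        with urllib.request.urlopen(url) as resp:
            data = json.loads(resp.read())
        return data.get(ip, {}).get("proxy") == "yes"

    # e.g. in your application's settings, accept the key from the user:
    # user_key = load_setting("proxycheck_api_key")   # hypothetical helper
    # if is_proxy(visitor_ip, user_key): block(visitor_ip)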

Our reasoning for allowing third-party software is simple: it's good business sense. We cannot possibly author all the software that would benefit from a good proxy detection API ourselves, so encouraging other developers to build our API into their software is in our best interest.

But it's also a good deal for third-party developers, because you don't need to worry about running a complicated, always-available API; you can simply build and charge for the client software you make. And we're not doing any revenue split, so you do not owe us a penny for anything you make which uses our API.

In fact, to assist you in broadening your software's audience, we are featuring third-party software on our examples page. So if you've made something that uses our API and you give your users the ability to input a proxycheck.io API key, we are more than happy to feature it: simply email us with a link to your code example, application, plugin, function or SDK. If it uses our API, we'll feature it.

To that end, today we added three new third-party applications to the examples page. We hope to add many more, and perhaps your application will be featured soon. Thanks for reading!
