New server dedicated to Inference added!

Today we'd like to share with you a new server we've added to our family called STYX. It joins HELIOS and PROMETHEUS within our cluster, but it has a dedicated job: inference.

Specifically, post-processing inference. We've run into a bit of a problem: the volume of incoming traffic we're receiving is now so vast that processing all of the undetected traffic has become a huge burden for our main cluster nodes. They have to run the API, perform live inference, host the website, and coalesce, process and synchronise statistics, and on top of all that they also have to work through a literal mountain of addresses and figure out which ones are running proxies as part of our post-processing inference engine.

Below is an illustration showing our current system where each node handles its own incoming addresses and then simply updates the other servers about any new proxies it discovers amongst that data.

[Illustration]

As you can see, 50% of working time is spent on the API, as it should be, but 25% is spent on our post-processing inference engine. Recently this has meant we can only process 1/20th of the undetected address data we're receiving. In other words, if we find an IP that isn't already in our database, there is only a 1-in-20 chance it will even be processed by our post-processing inference engine.

To fix this we've tried a lot of different things, from precomputing as much data as possible and storing it on disk, to reusing inference data for common IPs (for example, if two IPs are in the same subnet, a lot of the prior computational work doesn't need to be done again). But none of this has been enough, because the volume of addresses being received is simply so high.

In addition, we have a privacy commitment to our customers to hold undetected IP information for a maximum of one hour. So we're up against the clock every time we receive an IP that needs to be examined by our inference engine.
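That one-hour limit behaves like a time-to-live on the processing queue. A minimal sketch, assuming a simple arrival-ordered queue (the real storage mechanism isn't described):

```python
from collections import deque

RETENTION_SECONDS = 3600  # the one-hour privacy commitment

pending = deque()  # (received_at, ip) pairs in arrival order

def enqueue(ip, received_at):
    pending.append((received_at, ip))

def purge_expired(now):
    # Drop any address that has waited an hour or more
    # without being processed, honouring the retention limit.
    dropped = 0
    while pending and now - pending[0][0] >= RETENTION_SECONDS:
        pending.popleft()
        dropped += 1
    return dropped
```

Anything the inference engine can't reach within the hour is discarded unexamined, which is exactly why throughput matters so much here.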

So what is the solution? We've decided to invest in a new dedicated server with many high-performance processing cores and a lot of memory to deal with this problem specifically. We've ported our Inference Commander and Inference Engine software to this new standalone server, where it can spend 100% of its time working on inference.

Below is an illustration showing how our three main nodes now have their addresses pulled down by our new server, which we're calling STYX, so it can process them with its immense compute resources.

[Illustration]

Already we've been able to move from processing only 1/20th of the addresses we're sent per day to processing 1/7th, and we're confident we can increase that further, by carefully examining where the bottlenecks are and solving them, until we're able to process every single address we're sent. We can run this new server at 100% without worrying about other tasks suffering, as it doesn't host our website or API; its sole purpose is inference.

The other benefit of this new server is that it frees up the main nodes to handle more customer queries. We've already seen improvements in query response times during peak hours, which directly correlates to being able to handle more queries per second.

Thanks for reading and we hope everyone is having a great week!


The following is an edit to this post made on the 7th of Feb 2019.

As of this update, our new server is now processing 100% of all the undetected addresses coming in through our post-processing inference engine software, a big jump from the 1/7th we originally quoted when this blog post was made. Over the past several days we have been tweaking and gradually increasing the volume of queries, and today we have hit a more-than-sustainable processing threshold allowing us to process all incoming data. We're very happy with this, and so we thought an update was in order :)


Threats page now includes attack history!

This is a feature we've had customers ask for, and one customer in particular (you know who you are!) has been very vocal about wanting it. Today we're happy to provide attack data on our threats pages. Below is a screenshot showing what it looks like.

[Illustration]

On the far left, the coloured dots represent the unique properties we're monitoring that saw attack traffic from the shown IP Address. The dots are not random but generated from the unique identifier we place on each individual property. This means you can use them to discern whether an IP is attacking just one thing or many different things across the web.
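One common way to derive a stable colour from an identifier, as described above, is to hash it and take the first few bytes as RGB. This is a hypothetical sketch; our actual identifier-to-colour scheme isn't documented here.

```python
import hashlib

def property_dot_color(property_id):
    # Hash the identifier and use the first three bytes as an RGB colour,
    # so the same property always yields the same dot colour.
    digest = hashlib.sha256(property_id.encode()).digest()
    return "#{:02x}{:02x}{:02x}".format(digest[0], digest[1], digest[2])
```

Because the colour is a pure function of the identifier, the same property shows the same dot everywhere, while different properties almost always get visibly different dots.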

On the far right we're listing the kind of property that is being attacked. This may say SSH, FTP, Game or in the screenshot above, Website.

We've also grouped attacks together so we can display more unique attacks. For example, if an IP is trying to brute force a login page on a website and tries hundreds of times within a short time frame, those attempts will all be shown as a single entry.
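Grouping like this can be sketched as collapsing repeated events from the same IP against the same target when they fall within a short window. The event shape and the 600-second window here are assumptions for illustration.

```python
def group_attacks(events, window=600):
    # Collapse repeated attempts from the same IP against the same target
    # into one entry when they occur within `window` seconds of each other.
    grouped = []
    for e in sorted(events, key=lambda e: e["time"]):
        last = grouped[-1] if grouped else None
        if (last is not None and last["ip"] == e["ip"]
                and last["target"] == e["target"]
                and e["time"] - last["last_seen"] <= window):
            last["count"] += 1
            last["last_seen"] = e["time"]
        else:
            grouped.append({"ip": e["ip"], "target": e["target"],
                            "count": 1, "last_seen": e["time"]})
    return grouped
```

Hundreds of rapid brute-force attempts collapse into one entry with a count, leaving room in the log for genuinely distinct attacks.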

This, for us, is a starting point. You'll note in the top right it says BETA; that's because this is a new feature we're still actively working on. We intend to integrate this attack log with your dashboard's positive detection log.

Right now it doesn't display all the data we have internally. Instead we've made profiles for specific types of attacks (Vulnerability Scanning, Brute Forcing, Automated Registrations, Port Scans and so on), and we will be expanding the profiles to support more kinds of malicious traffic as time goes on. Our intention with this feature is to make an attack log that is actually useful and legible, one that doesn't throw needless information in your face.

In the near future we will feed this data directly into our v2 API endpoint, providing you with more data about an IP Address that you can use to make better decisions on whether to block it from accessing your property. So keep an eye on our blog for future updates.

Thanks for reading and we hope everyone is having a great weekend.


Pricing Changes

Today we went live with our new pricing for 2019, and we'd like to discuss the new prices and the reasoning behind the changes.

Firstly we want to make clear that these changes only affect new subscriptions. If you're already subscribed you won't see the price you pay change unless you cancel your subscription and start a new one.

Secondly we've reduced the prices on two of our Starter and two of our Pro plans. So if you are on these plans you can cancel and re-subscribe to receive the new lowered pricing. The only plans we've increased in price are two of our Business plans but again this only applies to new subscriptions.

Thirdly, we've reduced our discount for paying yearly from 20% to 8.44%, which means instead of receiving just over two months discounted from a 12-month subscription, you now receive the equivalent of one month's free service.

So let's discuss these changes starting with the reduced prices. Prior to today we had three Starter Plans priced at $1.99, $3.99 and $5.99.

The new pricing as of today is $1.99, $3.49 and $4.99, and you still get the same volume of queries. We've made this change because we see a lot of customers bunching up around the $1.99 and $3.99 prices, and very seldom do customers take the $5.99 plan. So what we've done here is make the biggest Starter plan more affordable by a dollar, which keeps all of the Starter plans under $5. We're also hoping to entice users who are paying $1.99 to upgrade to the $3.49 plan, as you'll get twice the queries for just $1.50 more. We still think the $1.99 plan is a great introductory price, so we're not changing it at this time.

With our Pro plans we had plans priced at $7.99, $9.99 and $14.99. By far the $9.99 plan was the most popular; users very seldom purchased the $14.99 Pro plan. So to entice more users to upgrade to Pro we've reduced the first plan's price from $7.99 to $6.99, and we've reduced the $14.99 plan to $12.99. The $9.99 plan performs really well, so we've decided to keep its pricing the same. And of course the query volumes are the same too.

For our Business plans we have increased the prices of two plans. The prior pricing was $19.99, $24.99 and $29.99. The new pricing is $19.99, $29.99 and $39.99. We've increased the middle plan price by $5 and the last plan price by $10. We've done this because all three plans are seeing heavy use by businesses and we feel the pricing was a little lower than it should have been for what is essentially commercial use.

For Enterprises we've not made any pricing changes at this time. We feel all the Enterprise plans are performing right where they should be so for now no adjustments are needed.

The final pricing change to discuss is the annual discount. Prior to today you would receive a 20% discount if you purchased a yearly plan instead of a monthly one. We introduced this at the start of our subscription service mostly as a gesture of goodwill to customers who had been on our prior yearly purchase model (before we had subscriptions). When we switched to subscriptions our prices did increase, and we didn't want customers who had paid for a year previously to face a very high renewal when their time came. We've now reached a point where there are no more pre-paid yearly customers; every customer today is on some kind of subscription pricing tier.

So we've reduced the discount from 20% to 8.44%, which as noted above is the equivalent of one month. We think this is still generous and shouldn't affect new yearly subscribers too negatively. And of course, if you hold a currently active yearly subscription, you will not see any changes to what you pay at your stated renewal time, because all of these pricing changes, including the discounts, only apply to new subscriptions.

We know when services adjust pricing it can be frustrating, and that's why we are not changing your active subscriptions; doing it this way keeps you in control of what you pay, and you can decide whether or not to start a new plan. If you have any questions about this change or anything else on your mind, please contact us. We love to hear your feedback.

Thanks for reading!


Inference Improvements

[Illustration]

Today marks our first big code update of 2019, and we'd like to share some of the details. Over the past month we've been focusing on improving our inference engine, and specifically our ability to leverage the data you send us in our engine's determinations.

Each day our customers send us millions of IP Addresses to check, and we don't only supply you with information about those addresses; we also feed them into our inference engine in an attempt to spot patterns that can identify more proxies.

For example, when we see an IP Address quickly making hundreds of connections across multiple customers' properties, that in itself can be a red flag. And thanks to the tag system, we also have a lot of insight into which pages on your properties the IP Address is attempting to access.
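A red flag of that kind can be sketched as a simple rate-and-spread check. The thresholds and event shape below are invented for illustration; our real signals are more involved than this.

```python
def looks_suspicious(events, ip, window=60, min_hits=100, min_properties=3):
    # Flag an IP making a burst of connections across several different
    # customer properties inside a short window. Thresholds are invented.
    hits = [e for e in events if e["ip"] == ip]
    if not hits:
        return False
    latest = max(e["time"] for e in hits)
    recent = [e for e in hits if latest - e["time"] <= window]
    return (len(recent) >= min_hits
            and len({e["property"] for e in recent}) >= min_properties)
```

An IP hammering many properties at once trips the check, while a visitor browsing one site at a normal pace does not.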

We've taught our inference engine to understand many different kinds of customer usage so that it can identify when an IP is acting benignly, such as accessing a static webpage or a blog post, and when it's acting maliciously, like making automated login and registration attempts, spamming blogs and forums, or brute forcing an input field.

With today's update we've improved this system so that it works much better for customers who are using our API on their game servers and at server firewalls. We've also greatly improved its understanding of different webpages so that it can identify whether a page is at high risk of automated behaviour by bots (which commonly use proxies to remain anonymous while they carry out attacks).

This update went live today because we felt the improvement in detection rate was so good that we didn't want to wait until late February when we were planning to include this in a more comprehensive update. Instead we've pushed out this change today so that you can all benefit from it now.

That's it for this update, we hope everyone is having a great start to the year!


New SourceMod Plugin

Last week we added a brand new plugin to our plugins page made by an independent developer called Sikari. The plugin allows you to integrate proxy checking from us and other providers into your Source game server.

So whether you're running a Counter-Strike, Day of Defeat, Team Fortress or another SourceMod-compatible game server, you now have an easy way to block proxy and VPN users from causing disruption and evading bans.

We're very thankful to Sikari for making the plugin and we're sure it will be a great benefit to all Source server operators.


Updated C# code example now supports our v2 API

Late last month, just as we were shutting down for the holidays, the developer of our C# example, hollow87, updated his proxycheck C# library to support our v2 API, including all our latest features such as lat/long and city information.

You can check out the example here on our code examples page or here on the NuGet page for the project. Using this library you can build a very robust implementation of our API in your .NET applications with very little effort.

Happy coding! 👩‍💻


Holiday Support Times

Hello everyone, below are the dates when live support will not be available but email-based support will still be available. We will of course be monitoring our site and the API throughout the holidays, and you can still make purchases and cancel a plan during the stated time periods; everything will remain fully functional.

Email support only between:
Sunday, December 23rd to Wednesday, January 2nd.

To be clear, we will only be offering email-based support from Sunday, December 23rd until Wednesday, January 2nd, at which point normal support services, including our web-based live support chat, will resume. Thanks!


What a great year it's been!

As we're now just a few weeks away from the new year, we thought now would be a good time to go through some of the milestones from this year.

If you would like to view the specific features we added or enhanced this year we recommend taking a look at our November 2018 newsletter. It's a great read and all of the changes listed within the newsletter link to blog post entries for more detail.

This year we saw impressive growth; at one point the volume of API queries we were receiving was doubling every month, and we had to upgrade not just our API but our server hardware a few times to keep up with the projected demand. We also enhanced our network infrastructure significantly by using a worldwide Virtual Private Network (VPN) through our Content Delivery Network (CDN) provider, which reduced latency for customers who are furthest away from our servers.

As we mentioned above, we upgraded our API several times. In fact we launched our v2 API in January 2018, and since then the adoption rate has been incredible; we've updated you a few times on these numbers, and today we're seeing 91% of all customers using the v2 API for their queries, up from 72% back in July.

In addition to launching the v2 API at the start of this year, we also upgraded the API several times throughout the year, reducing query latency enormously. You may remember that when v2 launched, its main feature was the ability to check multiple IP Addresses in a single query, allowing efficient batch processing.

At the time the efficiency wasn't as high as it needed to be, so we could only allow 100 IPs to be checked simultaneously, and that could take up to 90 seconds in some cases with all our checks enabled. In February we raised that limit to 1,000, and then in August we raised it again to 10,000. These kinds of improvements were made possible by redesigning core parts of the API to remove bottlenecks. Today you can check 10,000 IPs in around 25 seconds.
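On the client side, staying under a per-query limit like the 10,000 quoted above is just a matter of chunking. A minimal sketch, assuming a comma-separated batch payload (the exact request format belongs to our API documentation, not this post):

```python
def batch_payloads(ips, limit=10000):
    # Split a large list of IPs into comma-separated batches no larger
    # than the per-query limit, ready to be sent one request at a time.
    for start in range(0, len(ips), limit):
        yield ",".join(ips[start:start + limit])
```

With 25,000 addresses this yields three batches of 10,000, 10,000 and 5,000, each small enough for a single query.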

Even while we added more data to the API, including improved provider, country and ASN information, the addition of IPv6 data and, most recently, city, longitude and latitude data, we've been able to increase performance. In fact we now track more datacenter providers and more proxies than ever before, and we spend more machine hours than ever processing incoming queries through our inference engine, and yet the API is faster today than ever.

Going into 2019 we intend to invest even more heavily in our unique technology. We recently did an infrastructure deep dive which resulted in a large influx of questions to our support staff from people wanting to know more, and we intend to follow it up with an architecture overview breaking down some of our custom tools next year, so stay tuned for that.

This year we also spent a lot of time improving the tools customers can use on our website. We improved our Dashboard APIs, improved the web interface page several times and introduced the new threats page. We also invested heavily in the customer dashboard, making it more performant, improving its appearance, adding two-factor security and other enhancements. We'll keep improving this next year too.

So that's what happened with us this year: huge growth and lots of improvements to the site and our service. We want to thank all of the customers who took the time to try our service, and we're especially thankful to those of you who took the time to write to us with ideas and suggestions; that input has often been invaluable. In fact, a lot of the features listed in our November newsletter were born from ideas pitched to us directly by customers.

We hope everyone has an enjoyable festive period and a great new year. We'll be making one last blog post directly after this one covering our holiday support times, so make sure to check that out; apart from that, you should next hear from us in January 2019!


Updated Threats Page

When we added the Threats page in late August, it was always our intention to create a home view where you could see live emerging threats. But we didn't quite know how we wanted it to look or what information we wanted to make available, as we didn't want to give bad actors easy access to unique proxy data with which to launch malicious acts.

So we spent some time on it and decided that providing a map showing which countries the most attacks are coming from, alongside a heavily condensed list of recent detections, would be the best way to accomplish our goal. We'll keep adding more data as we feel it's appropriate, but for now this is the starting point for our new threats home view.

[Illustration]

If you've used the customer dashboard before, this view will be quite familiar to you. We've kept it simple: the map shows around 5,000 recent attacks by country, with the addresses behind the attacks not displayed. Below that, we showcase 10 recent addresses that we've detected performing attacks and that also have active proxy servers running.

We hope you like the new home view; it's in addition to our specific address pages, which will continue to be accessible. We've actually updated those pages with new city and specific location data, which was also exposed through our v2 API update a few days ago.

Thanks for reading and we hope everyone is enjoying their weekend!


Issue with IPv6 VPN reporting

For around a 9-hour period between November 23rd and 24th, all IPv6 addresses checked with our service with the VPN flag enabled were incorrectly detected as VPNs. IPv4 checks were not affected.

This was due to human error on our part: we accidentally entered a VPN provider's IPv6 range incorrectly. We have since corrected the problem and put a process in place to make sure this cannot happen again. We're very sorry to everyone affected by our mistake; we did not live up to our own expectations of high-quality service in this instance.

