Realtime Inference Engine Improvements

In our previous update we shared with you the accuracy improvements we've made to our post-processing Inference Engine. This is the part of our service that searches for active proxies among our negative detections and in data from our Honeypots positioned around the world.

Today we've enhanced our real-time Inference checks, which are performed at the same time as your queries. Prior to today, only 1/3rd of our Inference Engine's capability was utilised for real-time checks due to the time it takes for determinations to be made.

We've now enabled 2/3rds of our Inference Engine's checking capability for real-time queries. Based on our testing, this means 95% of all Inference Engine-based determinations will now occur at the point you perform your query.

That means you're much more likely to receive a complete result the first time you check an IP Address, as opposed to us only detecting an IP as a Proxy Server after you've performed your query and already received a negative detection result.
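To illustrate what that means on your side, here's a minimal client sketch. The endpoint URL, key parameter and response fields below are hypothetical placeholders for illustration only, not a definitive description of our API.

```python
# Minimal sketch of a client acting on a query result in real-time.
# The endpoint, query parameter and response fields are illustrative
# placeholders, not our actual API specification.
import json
import urllib.request

def check_ip(ip_address: str, api_key: str) -> bool:
    """Return True if the IP is flagged as a proxy at query time."""
    url = f"https://api.example.com/v2/{ip_address}?key={api_key}"
    with urllib.request.urlopen(url, timeout=5) as response:
        result = json.loads(response.read().decode("utf-8"))
    # With most Inference Engine determinations now made in real-time,
    # a negative answer here is far less likely to be revised later by
    # post-processing, so the first response can usually be acted on.
    return result.get(ip_address, {}).get("proxy") == "yes"

if check_ip("203.0.113.7", "YOUR_API_KEY"):
    print("Blocking connection: IP flagged as a proxy at query time.")
else:
    print("Allowing connection: no proxy detected in real-time checks.")
```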

Prior to today, about 65% of our Inference Engine's positive detections were made in real-time, so this is a rather large increase in real-time detection rates. The final 5% will still be detected in post-processing with the entire Inference Engine enabled, but we're hoping to improve performance here as well so we can offer it in real-time at a later date.

While we have been able to tune the real-time Inference Engine considerably to allow for 2/3rds enablement, there is a small latency increase on queries of roughly 70ms. The thing to keep in mind is that this increase only occurs for what would otherwise be negative detections. If an IP Address has already been run through the Inference Engine or is otherwise detected in our dataset as a Proxy or VPN Server, you won't incur this extra latency, so think of it as a tradeoff for added accuracy.
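To make that order of operations concrete, here's a rough sketch of the flow described above. The function names, data structures and stand-in values are assumptions for illustration, not our actual implementation.

```python
# Rough sketch of the query flow described above. The names and data
# structures are illustrative assumptions, not our actual implementation.

# Stand-ins for the real dataset and processing queue.
known_proxy_dataset = {"198.51.100.23", "203.0.113.7"}  # previously detected IPs
post_processing_queue: list[str] = []

def realtime_inference_engine(ip_address: str) -> bool:
    """Placeholder for the 2/3rds of checks now run at query time (~70ms)."""
    return False  # pretend the real-time checks found nothing

def handle_query(ip_address: str) -> dict:
    # 1. IPs already detected as a Proxy or VPN Server are answered
    #    immediately, with no extra latency.
    if ip_address in known_proxy_dataset:
        return {"proxy": "yes", "source": "dataset"}

    # 2. A would-be negative detection is passed through the real-time
    #    Inference Engine, which is where the ~70ms increase applies.
    if realtime_inference_engine(ip_address):
        return {"proxy": "yes", "source": "inference-realtime"}

    # 3. The remaining checks still happen in post-processing, catching
    #    roughly the final 5% of Inference Engine detections.
    post_processing_queue.append(ip_address)
    return {"proxy": "no"}

print(handle_query("203.0.113.7"))  # known proxy: answered from the dataset
print(handle_query("192.0.2.50"))   # unknown IP: real-time checks, then queued
```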

As always, we're working to improve the performance of the API so we can answer your queries faster and support more queries per second. Improving our real-time detection rate is one of the core benefits of improved API performance, as our detection accuracy directly correlates with how much CPU time we can spend on each determination.

We hope you found this post interesting, thanks for reading!

