Real-Time Inference Engine

As we mentioned in our previous blog post about Honeybot, our machine learning inference engine has become so fast at making determinations about IP addresses that it exhausted our backlog of negative detection data, which considerably slowed its self-iteration.

We've now reached a point where the algorithm can consistently make an accurate assessment of an IP address in under 80ms, so we've decided to add the inference engine to our main detection API for real-time assessment.

What this means is that when you perform a query on our API, our inference engine now examines that IP address at the same time as our other checks are performed. Our hope is to provide a more accurate real-time detection system instead of only fortifying our data after a query is made.
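To illustrate why running the inference check alongside the existing checks keeps latency manageable, here is a minimal sketch: when checks are dispatched concurrently, total query time is bounded by the slowest check rather than the sum of all of them. Every function name and timing below is an assumption for illustration, not our actual implementation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocklist_check(ip):
    # Hypothetical stand-in for an existing database check (~20ms).
    time.sleep(0.02)
    return {"blocklist": False}

def inference_check(ip):
    # Hypothetical stand-in for the ML inference check (~50ms).
    time.sleep(0.05)
    return {"inference": "proxy unlikely"}

def query(ip):
    # Dispatch every check at once and merge the results; total time
    # is roughly the slowest check (~50ms), not the sum (~70ms).
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(check, ip) for check in (blocklist_check, inference_check)]
        result = {}
        for future in futures:
            result.update(future.result())
    return result

print(query("203.0.113.7"))
```

The same idea applies whatever the real check functions are: adding a new check only raises latency by the amount it exceeds the current slowest check.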

Our inference engine is still performing exhaustive testing on IP addresses with negative results to find proxies we weren't aware of, and our system still checks the surrounding subnet when it's confident there are other bad addresses in that neighbourhood. All of these checks are still performed after your query, in addition to the more targeted checks we're now doing in real time.
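A subnet sweep like the one described above could be sketched as follows, using Python's standard `ipaddress` module. The /24 boundary and the function name are assumptions for illustration; the post doesn't specify how wide a neighbourhood is actually examined.

```python
import ipaddress

def neighbouring_addresses(ip: str, prefix: int = 24):
    """Yield every other host address in the subnet containing `ip`.

    The /24 default is an assumed neighbourhood width, purely
    illustrative of the "surrounding subnet" idea.
    """
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    flagged = ipaddress.ip_address(ip)
    for host in network.hosts():
        if host != flagged:
            yield str(host)

# A /24 contains 254 host addresses, so one flagged address
# leaves 253 neighbours to queue for follow-up checks.
neighbours = list(neighbouring_addresses("203.0.113.7"))
```

Each yielded address would then be queued for the same battery of checks, which is why this work happens after the query rather than inside it.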

As of this post, the new real-time inference engine is live on our API and being served from every node in our cluster. One thing you should expect is slightly higher latency: previously our average response time (after network overhead is removed) was 26ms; with real-time inference that average has increased to 75ms.

We feel this is a good trade-off because we're continually working to reduce latency while also introducing more thorough checking. We're confident we can get back below 30ms soon, and we'll use those response time savings to introduce more types of checks.

Thanks for reading and have a great day!

