Earlier this month we told you about Ocebot, our new API testing bot, which performs queries against our API and records the results so we can examine the areas that need further optimisation.
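For the curious, here's a minimal sketch of the idea behind a bot like Ocebot: fire queries at an endpoint and record how long each one takes so slow areas stand out over time. The endpoint, payload, and field names below are hypothetical stand-ins, not our real API or Ocebot's actual code.

```python
import json
import time
import urllib.request

API_URL = "https://api.example.com/v1/check"  # hypothetical endpoint

def timed_query(value: str) -> dict:
    """Send one query and return the result plus elapsed milliseconds."""
    payload = json.dumps({"query": value}).encode("utf-8")
    request = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    start = time.perf_counter()
    with urllib.request.urlopen(request, timeout=10) as response:
        body = json.loads(response.read())
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"result": body, "elapsed_ms": elapsed_ms}

# Record a batch of probes and summarise the average response time.
samples = [timed_query(f"probe-{i}") for i in range(100)]
average = sum(s["elapsed_ms"] for s in samples) / len(samples)
print(f"average response: {average:.1f}ms over {len(samples)} queries")
```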
I'm pleased to say that since our last update, when an average response on a negative detection took 250ms (including network overhead), we've got that down to less than half: average negative detections now take just 119ms (including network overhead).
Previously our average response time for a negative detection, excluding network overhead, was 78ms. Negative results used to be much faster than that, but as our data set has grown tenfold over the past year, the time needed to access it has grown too. With the help of Ocebot we brought our data access times down to 43ms earlier this week, and then further down to 22ms just today, by optimising our functions and the way we access our database. These numbers exclude network overhead.
Going from 78ms to 22ms came from tuning our functions, rewriting the ones that weren't performant, and multithreading more parts of our checking pipeline. Getting the best performance out of our multithreaded system is a priority for us, as we know there is still more we can do here.
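To illustrate the multithreading side of this, here's a rough sketch of the general pattern; the check functions are hypothetical stand-ins, not our actual pipeline. When the stages of a check are independent and I/O-bound, running them in threads means total latency approaches the slowest single stage rather than the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def check_blocklist(value: str) -> bool:
    time.sleep(0.02)  # stand-in for a ~20ms database lookup
    return False

def check_honeypot_hits(value: str) -> bool:
    time.sleep(0.02)  # stand-in for another independent lookup
    return False

def check_inference(value: str) -> bool:
    time.sleep(0.02)  # stand-in for a model-based score
    return False

def run_checks(value: str) -> bool:
    # The checks are independent, so threads overlap their waiting time:
    # three ~20ms lookups finish in roughly 20ms instead of roughly 60ms.
    checks = (check_blocklist, check_honeypot_hits, check_inference)
    with ThreadPoolExecutor(max_workers=len(checks)) as pool:
        return any(pool.map(lambda check: check(value), checks))
```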
The final thing we did was alter our network routing. We now do smart routing to our server nodes, which has significantly reduced the latency you'll see when interacting with our services. We already use a CDN (Content Delivery Network), but now we're also optimising the routes your bits take once they hit our CDN partner, so they touch our servers as quickly as possible.
Essentially we've created a wider on-ramp so that customer traffic reaches us faster over better intermediary networks. This is the main reason the average response time fell from 250ms to 119ms, though our work on reducing API processing time is helping here too.
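As a simplified illustration of the latency-based routing idea: measure round-trip time to each candidate node and prefer the fastest. In practice this decision happens inside the CDN and routing layer rather than on the client, and the hostnames below are made up, but the principle is the same.

```python
import socket
import time

NODES = ["node-eu.example.com", "node-us.example.com", "node-ap.example.com"]

def round_trip_ms(host: str, port: int = 443) -> float:
    """Time a TCP handshake to the host as a rough latency estimate."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=2):
        pass
    return (time.perf_counter() - start) * 1000

def fastest_node(nodes: list[str]) -> str:
    # Route to whichever node answers the handshake quickest.
    return min(nodes, key=round_trip_ms)
```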
We hope you're enjoying these updates. Keeping the API as fast as possible is important because we're gaining more data per day than we ever have before. Data sources like our inference engine and honeypots now provide more unique and useful data than our manual scraping efforts, which has increased our database size considerably. Investing in making all that data as quickly accessible as possible is paramount to our service.
Thanks for reading and have a great day!