One of our customers' most requested features over the past year has been the ability to check multiple IP Addresses within a single query. This feature has many benefits, including reduced TLS handshake times, reduced resource usage from multiple webserver connections and decreased API latency through resource reuse.
To put it simply, it's a lot faster to perform one query with a 100 IP payload than it is to perform 100 queries with one IP each. We've tuned the new API for this multi-payload scenario, and the performance improvement is dramatic, as our benchmarks will show.
Before we get to those, a note: we consider this feature experimental, so the enhanced API is only accessible through /v1b/, with the b meaning beta. We're supporting the submission of IP Addresses via both GET and POST. You should use POST; the GET input is there mainly for testing in your browser.
We also want to be clear that this is not simply an abstraction endpoint that calls our API internally (and individually); we have gone through the API and rewritten every part of it to handle multiple checks. This differs from how our web interface page has functioned (we will be transitioning that page to the new API soon).
So let's get to the benchmarks. We performed each test multiple times with different addresses and averaged the results; there was not much deviation between runs. All tests used 100 IP Addresses and TLS encryption.
IP Addresses that are NOT already in our data set with real-time Inference Engine turned ON
- v1 (current) API with 100 queries each with 1 IP Address: "query time": "65.35s"
- v1b (beta) API with 1 query containing 100 IP Addresses: "query time": "36.184s"
This is an impressive reduction, but watch what happens when we disable our real-time Inference Engine.
IP Addresses that are NOT already in our data set with real-time Inference Engine turned OFF
- v1 (current) API with 100 queries each with 1 IP Address: "query time": "45.221s"
- v1b (beta) API with 1 query containing 100 IP Addresses: "query time": "6.133s"
Now we're seeing a much larger decrease in query time. To be clear, our cached Inference Engine data is still being processed here, so all past determinations made by the real-time and post-processing Inference Engines are still utilised; only live determinations have been turned off.
Finally, let's take a look at positive detections. This is where the IP Addresses being tested (all 100) are already present in our data set but not within caches. So it's still searching all of our data, but it's finding matches throughout the data set, as opposed to the tests above, which never found a positive detection.
IP Addresses that ARE already in our data set with real-time Inference Engine turned ON or OFF
- v1 (current) API with 100 queries each with 1 IP Address: "query time": "22.372s"
- v1b (beta) API with 1 query containing 100 IP Addresses: "query time": "0.639s"
Here again we're seeing a huge decrease in query time. This is where removing the overhead of multiple TLS handshakes and HTTP connections, together with our in-memory resource reuse, really comes into play.
So how do you start using the new API? We've made it really simple: when you perform a query to the v1b API with multiple IP Addresses, simply place multicheck in the IP field and provide your IP Addresses in a POST field called ips, with each IP separated by a comma. If you want to use a GET request instead, change the singular IP to multiple IP Addresses, also separated by commas. Below we've provided two examples.
GET request example
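As a sketch of the GET form described above: the comma-separated IP Addresses simply replace the single IP in the request path. The domain and key parameter below are placeholders, not the real endpoint.

```python
# Build a hypothetical v1b GET multi-check URL (domain/key are placeholders).
from urllib.parse import urlencode

api_key = "YOUR_API_KEY"  # placeholder
ips = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]

# The comma-separated IPs take the place of the single IP in the path.
url = "https://api.example.com/v1b/" + ",".join(ips) + "?" + urlencode({"key": api_key})
print(url)
# You could then fetch this URL with urllib.request.urlopen(url) to run the query.
```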
POST request example
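And a sketch of the POST form: multicheck goes in the IP field of the URL, and the IPs travel in a POST field called ips, as described above. Again, the domain and key parameter are placeholders.

```python
# Build a hypothetical v1b POST multi-check request (domain/key are placeholders).
from urllib.parse import urlencode
import urllib.request

api_key = "YOUR_API_KEY"  # placeholder
ips = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]

url = "https://api.example.com/v1b/multicheck?key=" + api_key
# POST body: a single "ips" field with comma-separated addresses
# (urlencode percent-encodes the commas as %2C, which is normal form encoding).
data = urlencode({"ips": ",".join(ips)}).encode()

req = urllib.request.Request(url, data=data, method="POST")
# response = urllib.request.urlopen(req)  # uncomment to actually send it
print(req.get_method(), req.full_url)
print(data.decode())
```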
You can still use your normal flags with these requests, for example ASN, VPN, Time and Node. We've also introduced a new flag just for v1b called INF which, as you can probably guess, controls our real-time Inference Engine so that you can perform multiple checks faster. To disable the engine, provide &inf=0 in your request; by default it's turned on.
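Combining the flags mentioned above into a query string could look like this; the flag names come from the post, while the endpoint and key remain placeholders.

```python
# Hypothetical v1b query string with flags (endpoint/key are placeholders).
from urllib.parse import urlencode

flags = {
    "key": "YOUR_API_KEY",  # placeholder
    "vpn": 1,               # existing flag: include VPN detection
    "asn": 1,               # existing flag: include ASN data
    "inf": 0,               # new v1b-only flag: 0 disables the real-time Inference Engine
}
url = "https://api.example.com/v1b/multicheck?" + urlencode(flags)
print(url)
```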
We're limiting multi-checking to 100 IP Addresses per query for now, but we do intend to increase that limit once the feature comes out of beta. We hope you'll all give it a good try, and we welcome your feedback at [email protected]
A few last things we wanted to mention about the new v1b endpoint: it still supports singular IP checks, and the JSON result format for those is exactly the same as it has always been when performing a single IP check. You will see the new multi-check format only when performing multi-checks.
And finally, since this is our new API, we're now working on it full time. It has some enhancements our older API didn't have, including better IPv6 support for VPN detection (since backported to /v1/ today). We've also moved where certain checks are performed, so you can now blacklist Google and Cloudflare IP Addresses, ranges and ASNs from your dashboard and have those blacklists adhered to, whereas before they weren't.
Thank you for reading, we hope you're all having a great week and we look forward to hearing your feedback about this new feature.