Data improvements between March and April


Today we wanted to detail a few significant data changes we've made over the past two months and how they impact the data we serve to you through our API.

Firstly, we've seen a large uptick in residential proxies being used around the internet to scrape websites and perform exploits. Residential proxy networks have outgrown the onion network (TOR) because their users are paid to participate, unlike the free model that TOR uses.

On TOR, anyone with an internet connection can launch what is called an exit node, and others can use it to proxy their internet traffic for free. We've detected TOR exit nodes from the very beginning. But now, with money involved, these residential proxy networks are growing exponentially. We've managed to find flaws in a few of them, which we've used to list their networks on our API.

In March we added the networks of two of the largest providers. Many customers had emailed us about our failure to detect a lot of these networks, and so we became aggressive in our pursuit of their network nodes.

To put the scale of these networks in context, in one case we were able to list 15,000 of 17,500 nodes on our API. That alone is four times the size of TOR. And while this has pleased the customers who asked for better indexing of these networks, it has come at a cost: false positives.

The reason for the increased false positive rate is that these networks rely on users sharing their home and mobile connections, which are often dynamic. A single subscriber may change IP address upwards of 20 times per day in some circumstances, which means that unless we're constantly evicting addresses from our database, we're going to have false positives.

To work around this problem we've begun evicting addresses at a much faster rate than we otherwise would, sometimes retaining an address for as little as 10 minutes depending on how dynamic we believe it to be. Addresses we don't see multiple times within these proxy networks are evicted from our data within an hour.
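
To make that concrete, below is a minimal sketch of what TTL-based eviction like this can look like. The sighting counts and thresholds are illustrative assumptions, not our actual implementation:

    <?php
    // Pick a time-to-live for a detected address based on how dynamic
    // we believe it to be (thresholds are illustrative assumptions).
    function evictionTtlSeconds(int $sightings, bool $highlyDynamic): int
    {
        if ($highlyDynamic) {
            return 10 * 60;      // highly dynamic: evict after 10 minutes
        }
        if ($sightings < 2) {
            return 60 * 60;      // seen only once: evict within the hour
        }
        return 24 * 60 * 60;     // repeatedly sighted: keep for a day
    }

    // An address is evicted once its TTL has elapsed since the last sighting.
    function shouldEvict(int $lastSeenUnix, int $ttlSeconds): bool
    {
        return (time() - $lastSeenUnix) > $ttlSeconds;
    }

The key point is that the TTL shrinks as our confidence that an address is dynamic grows, keeping churn in our database ahead of the churn in these networks.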

When it comes to evicting dynamic addresses in general, we have made significant progress. For example, 90% of mobile (5G/4G/3G etc) proxies are removed from our data within 10 minutes. We've also categorised hundreds of address ranges we know to be shared via carrier-grade NAT (CG-NAT), because the impact on the users of those networks would be too great to list them, even when a proxy is inhabiting an address within one of those ranges.
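
A simplified sketch of that CG-NAT safeguard is below. The 100.64.0.0/10 range is the standard CG-NAT space from RFC 6598; in practice the categorised list is much longer:

    <?php
    // Returns true if an IPv4 address falls within a CIDR range.
    function ipInCidr(string $ip, string $cidr): bool
    {
        [$subnet, $bits] = explode('/', $cidr);
        $mask = -1 << (32 - (int)$bits);
        return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
    }

    // Suppress a listing when the address sits inside a known shared range.
    function suppressListing(string $ip, array $sharedRanges): bool
    {
        foreach ($sharedRanges as $range) {
            if (ipInCidr($ip, $range)) {
                return true; // too many legitimate users share this range
            }
        }
        return false;
    }

    var_dump(suppressListing('100.72.1.5', ['100.64.0.0/10'])); // bool(true)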

We've also expanded our VPN detection greatly: we increased the number of hosting providers in our database by 19.4% between March 1st and April 8th, our biggest increase over such a short period. This was largely driven by investigating hundreds of address ranges and by our customers supplying us with suspicious addresses and providers through our contact form (and by the way, we read and reply to every message sent to us).

Finally, we've also been working heavily on our disposable email detection. We identified several issues in our internal systems that collect and store disposable addresses, and by fixing those we've been able to vastly increase the number of domains we're adding to our database. We've also built custom tools to obtain disposable domains from many of the most popular services automatically.

So that's the data update for you. We have also made a few updates to the Dashboard over the same time frame: you will now receive country and continent suggestions when creating rules that use those condition types, making it easier to target locations without having to guess how we present their names in our API. This feature is driven by our new resources feature found here.

We hope you found this post interesting and we would like to thank all of our customers who have taken the time to write to us about emerging threats, new proxy networks, suspicious addresses and temporary email domains. We very much appreciate your effort to make our data better and more thorough.

Thanks for reading and have a lovely weekend!


Introducing new stat graph with local time and per-minute precision


Today we're launching the first in a series of visualisation improvements for the customer dashboard statistics tab. And specifically, we're starting with the graph of your daily query usage.

There were a few things we wanted to resolve with this redesign.

  1. Add proper timestamps along the bottom of the graph
  2. Improve the resolution of the graph so it shows per-minute trends
  3. Display times and dates in your local timezone
  4. Make the chart interactive through selectable timescales, resolution and zooming
  5. Make the chart easier to visually understand by switching from filled graphs to line graphs
  6. Add a better floating toolbar that follows your mouse cursor

To explain where we've gone with the new chart, let's first show you what it looks like displaying the previous 15 days of a test account. This is set at a similar precision level to our previous chart, meaning it displays only a single data point per 24 hours of data.

[Image: the new chart showing the previous 15 days at one data point per day]

As you can see, every day is represented by very large and smooth lines. This is great for an overview but it doesn't show us the trends throughout a single day. That's where the new precision dropdown comes in: if we select to view this data in hourly increments, we get a much different view.

[Image: the same 15 days displayed at hourly precision]

Now we can see where our peaks and valleys are, but the data is so precise over the total 15-day time scale that a lot of the information has become crushed down at the bottom. This is where our new zooming feature comes in: you can simply draw a box over a section of the graph and view that area, like below.

[Image: a zoomed-in selection of the chart]

Now we get a better picture of what we're looking at. And all of this happens on the page in real time; in fact, the chart loads incredibly quickly even with very large volumes of data. Below is a gif showing the speed and fidelity while zooming into a single day in the graph.

[Image: animation of zooming into a single day]

The resolution we're displaying in the above graphs is an hour, but as you view smaller increments of time (such as the past hour or past day) you're able to select higher levels of precision, right down to viewing per-minute query volumes.

At present we're limiting the view to the past 30 days, but we may increase this in the future. We're storing each customer's query data for 365 days at per-minute resolution, so it's entirely possible for us to offer 90 days or even more.
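
Since we store per-minute counts, serving any coarser resolution is just a matter of summing minutes into buckets. Here's a small sketch of that idea (a simplification, not representative of our actual storage engine):

    <?php
    // Roll per-minute counts (keyed by unix timestamp) up into coarser
    // buckets, e.g. 60 minutes per bucket for the hourly view.
    function rebucket(array $perMinute, int $minutesPerBucket): array
    {
        $bucketSize = $minutesPerBucket * 60;
        $buckets = [];
        foreach ($perMinute as $minuteUnix => $count) {
            $bucket = intdiv($minuteUnix, $bucketSize) * $bucketSize;
            $buckets[$bucket] = ($buckets[$bucket] ?? 0) + $count;
        }
        return $buckets;
    }

    // Example: two minutes inside the same hour collapse into one point.
    $hourly = rebucket([1660300800 => 120, 1660300860 => 95], 60);
    // [1660298400 => 215] (timestamp floored to the start of the hour)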

We hope you'll check out the new chart within the Dashboard and let us know what you think. A lot of work went into this feature, especially the next-generation stat recording and storage which drives the per-minute data behind the chart.

Thanks for reading and have a wonderful weekend!


1 year on: Evaluating our move to AMD processors


One year ago in this blog post we shared the news that our infrastructure was changing: we would begin transitioning our servers from processors made by Intel to processors made by AMD.

Specifically, we would be using AMD's Zen 3 based Ryzen 9 5950X, a 16-core microprocessor. Before we detail the results of this transition, we wanted to mention that long before we ever decided to use AMD, we had customers emailing us to advocate for AMD's products. This was a common occurrence any time we announced the deployment of new Intel-based systems.

The tech community's enthusiasm for AMD has been growing since the first generation Zen products were released in 2017. We too had been monitoring AMD's many successes since the launch of Zen 1, and we consistently looked at the options available to us from our hosting providers.

When the Ryzen 9 5950X became available and offered everything we were looking for, from core and thread counts to clock speeds and ECC memory support, we knew it was time to transition away from Intel. And as luck would have it, our main European host was offering 5950X based servers in all their datacenters.

Concerns we had moving to AMD

But changing processor vendors isn't without risk. We're trusting that AMD maintains full compatibility with the x86_64 instruction set and that all of our software will work and remain essentially vendor-agnostic. If Intel were to become competitive again in the future, we might need to transition back to their products, and we would want to do that without rewriting all of our software.

The final concerns we had were regarding the chiplet architecture of the Zen based processors, the performance of the memory controller and overall system stability, as these Ryzen 9 processors are not validated for 24/7 usage as server processors.

Potential software compatibility problems

Our operating system on these servers is Windows Server 2022. On our previous Intel machines, it was a mix of Windows Server 2012 R2 and Windows Server 2022 as we were transitioning from 2012 R2 due to Microsoft ending support for it in late 2023.

When it comes to compatibility with our operating system there is only good news to report. The process scheduler of Windows Server 2022 is top-notch and fully understands the cache and core hierarchy of Zen 3 based microprocessors and can allocate program threads properly. Everything works and is stable.

Our software stack is mostly PHP, a mix of PHP 8.1 and 8.2. We also have some C and C++ auxiliary programs. All of our code and third-party software just worked; we didn't need to recompile anything to increase compatibility or improve performance.

Hardware issues, stability problems and solutions

That's the software side out of the way. When it comes to hardware, you may note above that we had some concerns about the memory controller, and here we did encounter an issue. All of our servers were equipped with 128GB of memory laid out as 4 x 32GB dual-rank 3200MHz ECC DDR4 UDIMMs from Samsung.

The CPU's memory controller and the memory modules we installed are all rated to operate at 3200MHz, but when operating at this speed (which, by the way, is JEDEC certified) we had instability on two of our four servers. This instability manifested as complete system lockups and crashes after several days of uptime.

Thankfully, due to our cluster architecture, these crashes did not negatively affect the service and we didn't experience any downtime, but it was a big problem that had to be resolved. In the end, we were able to make all our systems stable by changing the memory frequency from 3200MHz to 2666MHz, which appears to be the recommendation when using all four slots on a Ryzen motherboard with dual-rank memory modules.

Our opinion is that AMD's memory controller, at least on this processor, is weak. Intel's memory controllers on all their processors are much more stable at higher frequencies, when using multi-rank modules and when using multiple modules per memory channel.

Reducing the frequency of our memory modules did reduce our memory bandwidth by 16.66%. While it's frustrating to pay for expensive memory modules that are running at a lower speed than they should be, stability is paramount and not something we're willing to compromise on. We've experienced no more instability since making this configuration change.

Performance

The performance has been exactly as the graphs from our initial announcement showed: wildly faster than our previous setup, with a single one of these servers eclipsing all our previous servers combined in not just CPU performance but storage I/O too.

When we switched processors we also eliminated the remaining hard disk drives in our cluster by changing to enterprise U.2 NVMe flash drives from Samsung. These SSDs have been incredible for consistency, as one of the issues we had with our previous architecture was the I/O wait times on the hard disk drives. We tried to mitigate that by using in-memory caching as much as possible, but there are limits to how much data you can store in memory.

The 128GB of memory that we installed hasn't been that impactful from a quantity perspective, but we knew that was likely to be the case before we migrated: our older servers had much less memory, and we had learned to be frugal with what we had.

Lessons for the future and would we go AMD again

The main thing we learned is to scrutinise the spec sheets. The RAM frequency issue with dual-rank modules took us by surprise; thankfully it was an easy thing to resolve, though diagnosing it as the root cause wasn't so easy.

Since we deployed these 5950X based systems we've also acquired a Zen 3 based 24-core EPYC system with 256GB of memory, which is being used for local research and development, running many virtual machines and a mirror of our live physical infrastructure. We're very happy with this system and indeed all of the AMD systems we operate.

For the foreseeable future we see ourselves buying AMD based systems exclusively, and looking at the state of Intel's product roadmap this seems likely to hold true through to 2026. AMD has recently launched Zen 4 based processors, a highly performant suite of products utilising newer DDR5 memory, and although we don't have any of these yet, we could see ourselves deploying one either for local development or as a remote node in our cluster in the near future.

We hope this post was interesting. A few customers have asked us in specific detail how the upgrade went, and we hope this answers those questions. Thanks for reading and have a great weekend!


New API version, currencies now live

Today we've promoted our January 14th API version from beta to stable, and it's now the default and recommended version of the API. With that, we would like to describe the new currency result in the API in a little more detail.

[Image: three address results showing currency information]

Above is a screenshot showcasing three different address results and the currency information for the countries to which those addresses are assigned. As you can see, we display the ISO code, the name of the currency and the symbol. To activate the currency information, simply supply &asn=1 with your requests.

We're still welcoming any corrections users may have for this data, as there are some disputes about which symbols should be used to represent specific currencies, even within a country's borders, and so we're having to make some judgement calls on a case-by-case basis.

With this addition to the API it should now be easier than ever for you to localise your content for specific visitors. We've been asked a lot for more tools to pinpoint a user's local metadata beyond just location information, which is why we added timezones and now currencies, and we're looking to expand the amount of data we offer within this same sphere in the future.

Thanks for reading and have a wonderful week!


Introducing Currencies to the API and documentation improvements


Today we've introduced a new version of the API dated 14-Jan-2023 (selectable from within your dashboard) which adds currency information to the API output. This is a beta feature which is why this new API version is not selected by default for users who have selected "Latest Version" from the API selector dropdown.

With this new API version when you check an IP and have the asn flag enabled (&asn=1) you'll receive local currency information for the IP you're checking. This includes the ISO code, name and symbol for the currency.

As it's a beta feature, we are looking for your feedback, including bug reports, incorrect data and so forth. With the new feature we've also added a new flag called &cur=0/1 which lets you disable or enable this data in the API result. By default, when using &asn=1 the currency information is shown, and if you don't want it you can supply &cur=0 to disable it.
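
As a quick illustration, here's how a request with these flags might look from PHP. The key is a placeholder, and the exact field names in the currency object are an assumption on our part, so check the documentation for the precise response shape:

    <?php
    // Query the API with ASN data enabled; currency info rides along
    // with &asn=1 unless &cur=0 is supplied.
    $ip  = '8.8.8.8';
    $url = "https://proxycheck.io/v2/{$ip}?key=YOUR_API_KEY&asn=1&cur=1";

    $result = json_decode(file_get_contents($url), true);

    // Hypothetical field names for the currency object.
    $currency = $result[$ip]['currency'] ?? null;
    if ($currency !== null) {
        echo "{$currency['code']} {$currency['name']} {$currency['symbol']}\n";
    }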


In addition to the new currency feature, we've also added a new resources section to the website which we're beginning to fill with useful information that customers may need to better utilise our API or integrate our data with their services. The first resource we're launching with is geographical data, specifically lists of continents and countries that you can expect to see in our API responses.

You can view that new section on the updated API documentation page found here.

That's all the updates we have for you today. We hope you'll take the time to try the new API version and let us know what you think! Thanks for reading and have a great weekend.


Our 2022 Retrospective


At the end of each year we like to look back and discuss some of the significant changes that happened to our service, and this year has seen some of the biggest changes since we started. So without further ado, let's go through the major milestones.

Back on January 2nd we launched managed rules which are like our previously available feature called custom rules but differ in that we (proxycheck.io) manage the conditions of the rule for you.

This feature enabled you to add rules that have changing data (for instance rules that target a specific organisation) and have us update the rule over time for you.


Next on January 19th we introduced easy plan alterations. This was a major change in the flexibility of our service enabling you to upgrade or downgrade your plan at any time and receive credits or deductions when changing plans, always to save you money.


On January 21st we added a major improvement to custom rules called continued execution, which allowed your rules to work more like filters, letting multiple rules act on a single query instead of stopping after a single rule was triggered.


All of the above concluded a very busy January but February was to bring the biggest change to our infrastructure since we first started. Namely our European Cluster Refresh on February 10th.

[Image: performance comparison graph]

As the graph above illustrates, this was a ginormous performance increase for our infrastructure, delivering a 4.42x improvement when comparing our four new servers to our six previous ones. The increase was so great that we haven't needed to add more servers since our new infrastructure was deployed in February 2022.


That upgrade not only delivered astounding CPU performance but also quadrupled our system memory and doubled our storage quantity. And speaking of storage we eliminated the last hard disk drives from our infrastructure with this upgrade, switching them for the fastest enterprise grade U.2 NVMe drives on the market and in doing so dramatically reduced our storage latency and increased our storage throughput.

Our next major feature was disposable email detection which launched 5 days later on February 15th. This was a major departure from our core service of detecting anonymous IP addresses but we felt it was close enough in line with what our service stood for to offer it.

And put simply, disposable emails were a nuisance that even we were dealing with daily and so we saw a great synergy between what we already offered and what detecting disposable addresses would accomplish. It has been a great feature for us that has opened our service up to new users.

On April 20th we introduced the ability for customers to pay invoices themselves manually. This was mostly brought about due to new Indian banking regulations that disallowed automatic subscription payments to international entities. We spruced up our invoicing UI during this update as well.

On May 25th we released a major update to our whitelist/blacklist feature, which had been present since the Dashboard was first added and had seen very little love since that time. In fact, the original whitelist/blacklist feature was built so early in our deployment that it was sticking out like a sore thumb, and because of that we wanted to redo it.


One of the things we thought about during the redesign process was how we could make this feature even more useful while maintaining its class-leading ease of use. With the original listing feature we gave you a large canvas in which you could put anything you wanted: essentially a big text document where you could write anything without needing to fiddle with UI controls or place things into pre-defined categories.

Our competitors often implement a whitelist/blacklist feature as a series of input fields where you enter one address per field, requiring you to keep clicking to add new entries, often with a small limit on how many entries you can have in total.

We never did it that way because we felt that was a very poor user experience. It requires far too much input from the user when what they want to do is just copy and paste a large list of addresses in one go.

So when it came time to refresh the interface, we knew we wanted to maintain that ease of use; however, simply having a large text field as we had previously didn't look attractive. So we containerised the lists into what we now call "Custom Lists".

[Image: an expanded Custom List with its controls]

Above is an expanded view of one such list with all the various controls. This new view enabled easier exporting, re-arranging, a nicer visual appearance, list naming and even creating lists that aren't used for whitelisting or blacklisting at all but instead are utilised only by Custom Rules, a feature we introduced earlier.

One other major thing it introduced was downloadable lists. Customers told us they liked using the blacklist feature, for example, but found it labour-intensive to keep updated. And although we did add an API to manipulate whitelists and blacklists many years ago, that does require coding experience.

[Image: the Automated Mode button]

And so, as the above image illustrates, we added a new Automated Mode button which, when pressed, allows the user to specify a website URL; our service will then grab the content of the provided page and use it in their Custom List on a specified schedule. This has been a very well-received feature and has been used extensively since its introduction.

On June 9th we reached a blog milestone with our 200th blog post. We made a post all about dark patterns in design (deceitful user interfaces) and how we shy away from those practices. It made for an interesting topic.

On August 12th we launched a new site-wide changelog page allowing for easier tracking of new features, improvements and fixes across our entire website.

On September 2nd we added time zones to the API result, a much-requested feature that lets you see an IP's local timezone.

On September 12th we finally added support for paying for service with a myriad of cryptocurrencies. A much-requested feature since the beginning of our service, this has been a net good: we've processed many payments using Bitcoin, Litecoin, Ethereum, Tether, USD Coin and Dogecoin since its introduction.

[Image: a few of the cryptocurrencies we now accept]

Above are just a few of the crypto coins we now accept for service.

Then as if the month wasn't busy enough we launched a better way to handle your API key changing on September 17th. This was a requested feature for enterprises that like to rotate their API keys regularly but didn't want to have a second of downtime while they did it.


On October 17th we introduced postcodes to the API. Much like the timezone feature mentioned above, this was a user-requested feature that we spent a lot of time bringing to the API; it was important to us that it was very accurate before we launched.

And that brings us to today. There are many things we launched this year that were not user-facing. For example, the backend system that we use to shuffle data between servers was completely rewritten. We managed to significantly reduce CPU usage dedicated to database synchronisation through this effort.

We also overhauled how we scan for new proxies and traverse VPN companies' infrastructure, both for speed and reliability reasons. And we worked heavily with our partners to improve our methods of extracting data from their infrastructure, whether they provide it to us directly through our import system or we go to them and download the data from their servers.

Another thing we focused on heavily this year was false positives. As mobile internet access and CG-NAT become ever more prevalent, we have to do more to discover shared addresses and scrub them from our database. We made huge strides in this area and as a result reduced our false positive rate significantly.

As another year comes to a close, we got a lot of things done. The physical infrastructure upgrade, Custom Lists and accepting cryptocurrency are personal highlights.

We hope everyone had a wonderful festive holiday and we look forward to bringing you even more great stuff in 2023!


Preparing for PHP v8.2


A few days ago the PHP foundation announced that PHP v8.2-stable will become available on December 8th. This is a major update to PHP which extends the language with new features while improving both performance and resource usage.

Some PHP History

Like most PHP releases, this is important not just for us but for the wider web developer community, as PHP still drives 77.5% of all websites as of November 2022 (source: w3techs.com). To put it simply, PHP continues to be a pretty big deal.

If you've been reading our blog for a while it should come as no surprise that we make extensive use of PHP for our website and API. This was a bet we made way back in 2016 that PHP as a language would continue to be available and actively maintained.

We looked at other projects built on PHP at that time, such as WordPress, and what we saw from other developers gave us the confidence to move ahead with it. Now, many years on from that decision, we don't regret it at all; the language has been great for us and continues to improve in every way.

The PHP Foundation

Just a quick sidebar: the PHP Foundation just celebrated its first year in operation. You may be thinking that's a typo, as PHP has been around since the 1990s, but it's a fact: the PHP Foundation was created in November 2021 and will sponsor the design and development of PHP going forward. We would say their first year has been a great success.

Our current PHP deployment & preparations

So with the history out of the way, let's focus on PHP v8.2. Firstly, we should mention that we like to stay current: we're using PHP v8.1.13, which was released only a few days ago. That's because every new PHP version brings with it performance improvements, code syntax updates, new features and lower resource usage. All of these things are great for our API, as it means we get them practically for free on a regular basis.

We say practically because sometimes there are changes which break prior code. The PHP developers sometimes need to break functionality to move forward, and this is where our preparation for PHP v8.2 comes into play.

As when we updated to PHP v8.1, which we covered in this blog post almost exactly a year ago, we need to test and validate that our existing code still runs, and runs well. We've been doing that using the PHP v8.2 release candidates. So far it looks good, and we believe we'll be able to update to PHP v8.2 quite quickly in December.

PHP v8.2's performance improvements

As we mentioned above, one of the things new versions of PHP regularly provide is performance improvements, and PHP v8.2 is no different. According to phoronix.com there is a healthy 2.5% average performance improvement over PHP v8.1. And we should note this test was performed very early in the v8.2 development cycle; in fact, it was conducted back in May 2022.

In our testing with the latest release candidate we've seen a better average for the specific functions we make use of in our code, and when extrapolated across the billions of requests we answer, those performance improvements matter. Essentially, the faster our code can execute, the more requests we can handle per second on each of our servers.

PHP v8.2's new features

The last thing we wanted to talk about was the PHP v8.2 features we're excited about.

Firstly, we're gaining readonly classes. These are pretty self-explanatory; we'll likely make use of them for some of the data objects we want to be immutable. Secondly, we're getting a new Randomizer class, which is essentially a better random number generator with swappable engines. Thirdly, redactable parameters in backtraces. This allows sensitive data to be removed from error reporting before it leaves the PHP runtime, improving privacy and security.
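
Here's a brief sketch of those three features together; the class and function names are just illustrative:

    <?php
    // 1. A readonly class: every property is implicitly readonly,
    //    making the whole object immutable after construction.
    readonly class GeoFill
    {
        public function __construct(
            public string $country,
            public string $timezone,
        ) {}
    }

    // 2. The new Randomizer class with a swappable engine.
    $rng   = new \Random\Randomizer(new \Random\Engine\Xoshiro256StarStar());
    $token = bin2hex($rng->getBytes(16));

    // 3. A redactable parameter: if an exception is thrown here, the
    //    backtrace shows a SensitiveParameterValue instead of the key.
    function verifyKey(#[\SensitiveParameter] string $apiKey): bool
    {
        return strlen($apiKey) === 32; // placeholder check
    }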

There are some other changes we like but they mainly deal with alterations to preexisting functions and are a bit too nuanced to explain briefly here.

So that's our PHP update. As we mentioned at the start PHP v8.2 launches on December 8th and we fully intend to be upgraded next month. There are some great performance wins in this release and PHP as a language is looking very healthy, we're looking forward to using all the new features in the coming year and seeing what PHP 9.0 brings!

Thanks for reading and have a wonderful week!


Celebrating Account Deletions


No, the title of this post isn't clickbait; we are indeed celebrating account deletions. It was one year ago, on the 7th of October 2021, that we released our account deletion feature, which allows our customers to completely erase their accounts and all associated data from our service.

And it's also just over a year since we added automatic account deletion, where we schedule the deletion of unused accounts after a year of inactivity (with a notification email sent to the account owner, of course).

Since then, 38 users have chosen to erase their accounts themselves and we've conducted 8,083 automated deletions of inactive accounts (created between 2017 and 2022). The total number is staggering and shows the scale of the privacy burden we all face when we sign up for services. There are entities out there that will never delete your account for you, even when it's clear you're no longer using their service, and we think that's just wrong.

When it comes to our automated deletions, we send our customers an email after a year of no activity to let them know we'll be erasing their account. If they need our services in the future they can simply sign up again, even with the same email address, as we no longer have it stored once we erase their account.

So why aren't more companies performing automated deletions of abandoned accounts? Well, as you are probably assuming, customer data is a very lucrative "product", and many companies look to increase their revenue by either remarketing to prior customers (sending you emails with advertisements for products and services) or selling your personal information to a broker, who will then bundle your information up with that of many other individuals to be sold to yet more entities.

We don't think that's morally right, which is why our privacy policy is so clear about where your data is stored and who has access to it. We're not dealing in mile-long policies that only lawyers can understand. It's also why we take an active role in minimising how much of your data is available to us: when it's clear you aren't using our services, we delete your data. It's as simple as that.

If you would like to learn more about how we treat our customers like human beings, please take a look at our recent 200th blog post, which was all about dark patterns in software design and how we shy away from those deceitful practices.

Thanks for reading and have a wonderful weekend.


Introducing Postcodes!


Today we've updated our API (May 2022 branch) to support Postcodes. This has been an often requested feature going back multiple years and we're excited to finally launch it for everyone today.

As with all of our other location data, accuracy is of the utmost importance to us, and it becomes a more difficult problem as the resolution of a location increases. We spent more than a year testing and verifying that the postcode data we provide through our API is as accurate as our preexisting region and city data.

We also needed to be mindful of the security implications inherent in offering postcode data. Postcodes cover a much smaller physical area than city or town names, and that carries privacy and security implications for those having their IP addresses checked.

To ease that concern, we provide more resolution than city and town names but less than street names; essentially, our postcodes identify the general area only and not individual streets, which we feel is a happy medium.

So that's the update for today. We know some of you really wanted this feature and will no doubt act fast to integrate it into your sites and services. We look forward to contacting many of you today to let you know that postcodes are now available.

Thanks for reading and have a wonderful week.


High Availability Improvements


Today we're introducing some availability improvements that assist you in keeping your services fully available to your customers while utilising our API. The first of these changes is to do with API key management.

We've heard from some customers that they like to rotate their API keys on an annual basis. This is good for security, as it guarantees any leaked keys are no longer usable. But the problem has been that when you change keys, there's a window where your software is still using your old key before you can input the new one.

This can result in denied queries or queries which don't utilise your custom rules or custom lists. For this reason we received a feature request to allow the old key to remain usable after a new key is issued, and that is exactly what we've made available today. From now on, when changing your API key you'll be presented with the interface below, which lets you specify how long your old key will remain usable.

[Image: the API key rotation interface]

You can still choose to revoke your previous key immediately upon generating a new one, but in addition you can now keep your previous key active for between 5 minutes and 8 hours by selecting a time frame that fits you from the dropdown box.
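
Conceptually, validation during the grace period looks something like the sketch below; the field names are hypothetical and this isn't our actual implementation:

    <?php
    // Accept the current key always, and the previous key only until
    // its chosen expiry time (5 minutes to 8 hours after rotation).
    function isKeyValid(string $suppliedKey, array $account): bool
    {
        if (hash_equals($account['current_key'], $suppliedKey)) {
            return true;
        }
        return $account['previous_key'] !== null
            && hash_equals($account['previous_key'], $suppliedKey)
            && time() < $account['previous_key_expires'];
    }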

In addition to this change, we had another feature request: improve our Dashboard usage API so that it indicates whether a burst token is currently in use. We've added this too; as shown in the example below, you'll now see a binary 0 or 1 indicating whether a burst token is active.

{
    "Burst Tokens Available": 6,
    "Burst Token Allowance": 6,
    "Burst Token Active": 0,
    "Queries Today": 1641,
    "Daily Limit": 640000,
    "Queries Total": 1588717,
    "Plan Tier": "Paid"
}

This brings the same activity status for your burst tokens to the Dashboard usage API as you can view in the Dashboard itself like in the screenshot below.

[Image: burst token status shown in the Dashboard]
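
If you poll this endpoint programmatically, reading the new field is straightforward; the URL below is a placeholder for the usage API's actual address:

    <?php
    // Hypothetical polling of the Dashboard usage API.
    $usage = json_decode(
        file_get_contents('https://proxycheck.io/dashboard/usage/?key=YOUR_API_KEY'),
        true
    );

    if (($usage['Burst Token Active'] ?? 0) === 1) {
        // A burst token is being consumed right now.
        error_log("Burst token in use after {$usage['Queries Today']} queries today.");
    }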

So that's all the updates we have for you today. We just wanted to reiterate that both of these feature changes are the result of direct feedback from our customers. If you have an idea please don't hesitate to contact us as we may just add it for you!

Thanks for reading and have a wonderful weekend.

