For this blog post, we would like to go over our architecture design for proxycheck.io and explain some of the decisions we've made along the way in building the service. To start: what exactly is a monolithic architecture, and what is the alternative approach?
Monolithic in software pretty much means running all your services on one or more beefy servers, as opposed to breaking your services out into what are commonly referred to as microservices and distributing them across many smaller servers, or even running them on what is known as "serverless" or "edge" computing infrastructure. The idea behind the microservices approach is that you remove a lot of overhead, like managing an operating system; instead, you manage only the specific application you've developed.
The other benefit is that you can scale microservices horizontally, meaning if you need more resources for an application you can simply spin up another copy of the microservice on another system and load balance between them.
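As a rough sketch of what "spin up another copy and load balance between them" means in practice (the addresses and the simple round-robin policy here are illustrative assumptions, not our actual infrastructure):

```python
from itertools import cycle

# Hypothetical pool of identical microservice replicas; scaling
# horizontally just means adding another copy to this list.
replicas = ["10.0.0.1:8080", "10.0.0.2:8080"]

def scale_out(pool, new_instance):
    """Add one more copy of the service to the pool."""
    return pool + [new_instance]

replicas = scale_out(replicas, "10.0.0.3:8080")

# A naive round-robin balancer: each request goes to the next replica.
rotation = cycle(replicas)

def route_request():
    return next(rotation)

first_eight = [route_request() for _ in range(8)]
print(first_eight)
```

Real load balancers (nginx, HAProxy, cloud load balancers) add health checks, connection draining and weighting on top of this, but the core idea of horizontal scaling is exactly this: more copies, requests spread between them.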
This approach, however, does have some caveats. As the number of microservices you run increases, the volume of network traffic between all the services in your infrastructure increases with it. After all, each service needs to obtain, process and share data with the rest of your infrastructure, and the more servers share that burden, the more there is to keep synchronised.
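A back-of-the-envelope way to see why this synchronisation burden creeps up: if every service must stay in sync with every other (a full mesh, which is the worst case we're gesturing at here, not a claim about any particular deployment), the number of links grows quadratically with the number of services:

```python
def sync_links(n_services: int) -> int:
    """Pairwise connections in a full mesh where every service
    must stay synchronised with every other service."""
    return n_services * (n_services - 1) // 2

# Doubling the number of services roughly quadruples the links to maintain.
for n in (4, 8, 16, 32):
    print(f"{n} services -> {sync_links(n)} links")
# 4 services -> 6 links
# 8 services -> 28 links
# 16 services -> 120 links
# 32 services -> 496 links
```

Even architectures that avoid a full mesh (message buses, shared databases) only move this cost around: the coordination traffic still grows much faster than the number of services does.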
Database traffic is often overlooked when people turn to these services, but it can become so substantial that you can no longer expand horizontally because there aren't enough resources to keep all your services synchronised. On top of this complexity there is also a creeping increase in cost from all this overhead, which can overshadow the costs you initially expected to incur for the resources serving your customers.
Good examples of services that moved from microservices to monolithic include Dropbox and even Prime Video, which recently shared an interesting blog post about how they reduced their costs by 90% by moving from microservices to a monolithic architecture. And yes, that is Amazon's Prime Video, who were using Amazon's own AWS services to operate their microservices.
To quote Amazon's Prime Video:
"Moving our service to a monolith reduced our infrastructure cost by over 90%. It also increased our scaling capabilities. Today, we’re able to handle thousands of streams and we still have capacity to scale the service even further."
So not only did it save them money, it also increased their ability to scale, helping them support more users with fewer servers.
We have used a monolithic architecture since the very beginning because, although we identified the benefits of microservices, and specifically the use of AWS's EC2 and Azure clouds to scale rapidly, we also identified many drawbacks. Performance for these services at the individual level is not high; that is to say, individual requests perform poorly.
To put it another way, the microservices approach is akin to flying 2,000 hot air balloons instead of 2 jumbo jets. Sure, you can carry twice as many people across those hot air balloons, but it will take them far longer to reach their destination.
And that was, and continues to be, the crux of the microservices model that has kept us not only on our monolithic trajectory but on our bare metal one too. When we rent servers we are the only tenant and we get to pick the hardware; we often pick the fastest available, and we have been replacing our older servers with new ones that offer 3x to 4x their performance.
Meanwhile, if you look at the past 5 years of "serverless" and cloud computing like EC2, performance has remained pretty much the same, driven by service providers' desire to maximise the number of customers per unit of available compute.
To us, speed matters. Compare, for instance, our customer dashboard with those of companies that use cloud providers and microservices: ours loads instantly and populates with data in the blink of an eye, while even some of the largest companies, like OVHCloud, make you wait upwards of 10 seconds for their customer dashboards to populate with information.
Now, we don't think that microservices have no use at all. There are certainly workloads that benefit from this approach, especially data processing that needs a lot of workers and doesn't need instantaneous results, as well as any workload that can be accelerated by dedicated fixed-function silicon, for example video transcoding, network encryption/decryption or packet routing. All of these tasks suit the horizontal growth that serverless/microservices can provide.
But for anything customer-facing where speed and latency are paramount, we just don't see the same benefits: users get frustrated waiting for things to load, overall performance suffers, costs can spiral out of control and the overhead of data synchronisation can be crippling.
We hope this was interesting. We wanted to go a bit more in-depth on this topic because our recent infrastructure posts spurred some customers to message us asking why we don't use cloud providers and instead continue to use bare metal.
Thanks for reading and have a wonderful weekend.