Running a high-load website can serve as a case study in the negative effects of long-term stress on the human body: mental pressure, hair loss, lack of sleep, burnout, social disconnection. Thankfully, it does not have to be like this. Running a high-load web CMS does not have to be stressful. Come with me as I set up a WordPress CMS to handle 1000 users per minute for less than the cost of lunch.
Note: This is not an HA (high-availability) configuration and is NOT production ready. Though, it is a solid starting point for one.
Tools for the toolbelt!
The tools this solution leverages are pretty mundane. No fancy HAProxy cluster with multi-cloud CDN edge locations here. Simplicity is king. A basic, bottom-dollar Lightsail instance running Ubuntu Server. Total cost to run the minimal infrastructure: about $10 USD/month. How much you will pay in data transfer depends on you.
Tech Stack:
- AWS – IaaS provider
- Lightsail – Compute Instance
- Ansible, Terraform, and Gatling for testing
Install all the things (programs)
To make setting up and testing different configurations as easy as possible, I created a simple Terraform configuration and used Ansible to install and configure the different setups. The three configurations under test are no caching, WordPress plugin caching, and Nginx micro caching. Nothing too fancy there. If you are interested, here is the Git repo.
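To give a feel for the Terraform side, a bottom-dollar Lightsail instance is only a few lines. This is an illustrative sketch, not the repo's actual config: the resource name, availability zone, and bundle ID are my assumptions (check the repo for the real thing).

```hcl
# Hypothetical sketch: a single plain-Ubuntu Lightsail instance.
# Names, zone, and bundle_id are illustrative; adjust to taste.
resource "aws_lightsail_instance" "wordpress" {
  name              = "wordpress-cache-test"
  availability_zone = "us-east-1a"
  blueprint_id      = "ubuntu_18_04" # plain Ubuntu Server, no Bitnami stack
  bundle_id         = "micro_2_0"    # the roughly $10/month bundle
}
```

From there, Ansible takes over provisioning against the instance's public IP.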
Caching, huh, what is it good for!
Every time a user requests a page, the request is processed by Nginx (the web server), then the PHP interpreter processes the WordPress logic and outputs the HTML/JS/CSS, passing the payload back to Nginx to be returned to the requesting user. Two steps along this path can be sped up massively with caching. Caching keeps a copy of the response for a given amount of time, returning the stored copy rather than re-processing the entire request every time.
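The core idea fits in a few lines. Here is a minimal Python sketch (names like `MicroCache` and `render_page` are mine, purely for illustration): keep each response for a short TTL and serve the stored copy instead of re-rendering.

```python
import time

class MicroCache:
    """Toy stand-in for a web server's short-lived response cache."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (expires_at, payload)

    def get(self, url, render_page):
        entry = self.store.get(url)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]            # cache hit: skip the expensive work
        payload = render_page(url)     # cache miss: run the full pipeline
        self.store[url] = (now + self.ttl, payload)
        return payload

renders = 0
def render_page(url):
    # Stand-in for the expensive Nginx -> PHP -> WordPress round trip.
    global renders
    renders += 1
    return f"<html>page for {url}</html>"

cache = MicroCache(ttl_seconds=60)
for _ in range(1000):                  # 1000 requests for the same page...
    cache.get("/", render_page)
print(renders)                         # ...but the "PHP" work ran only once
```

One expensive render per TTL window, no matter how many requests arrive in between: that is the entire trick.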
We will implement two caching layers: WordPress application caching via the WP Super Cache plugin and Nginx micro caching.
Since we will be running PHP 7.x, OPcache is enabled by default, so we do not need to worry about that now (unlike with PHP 5.x).
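For the curious, the relevant settings live in the OPcache section of php.ini. The values below are the stock PHP 7.x defaults, not anything you need to change; the file path varies by distro and is an assumption here.

```ini
; OPcache ships enabled with PHP 7.x; these are the usual defaults.
; Typical path on Ubuntu 18.04: /etc/php/7.2/fpm/php.ini
opcache.enable=1                 ; cache compiled PHP bytecode in memory
opcache.memory_consumption=128   ; MB of shared memory for the bytecode cache
opcache.revalidate_freq=2        ; seconds between checks for changed scripts
```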
Party Picture time!
All configurations below share the same baseline load: 1000 users ramped over a one-minute window, against Ubuntu 18.04 running the latest Nginx, MySQL, and PHP as of 2018-11-02.
First up, let's look at a basic LEMP install with no caching at all.
Wow, that’s rough. But it gives us a baseline. Looks like the machine can handle around 200 users per second before things start going bad.
Installing and configuring the WordPress caching plugin with the settings below, we test a type of application-level caching.
So the application-level caching brought down the failure rate, but not really enough to matter.
Lastly, I tore down the setup and stood up a fresh one, this time with Nginx micro caching and a one-minute cache lifetime.
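The micro caching setup is a handful of Nginx directives wrapped around the normal PHP-FPM location block. This is a hedged sketch of the shape of it: the zone name, sizes, paths, and socket are my placeholders, so see the repo for the exact config.

```nginx
# In the http block: where cached responses live, plus a shared-memory key zone.
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    # ... the usual server_name / root / index directives ...

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;

        fastcgi_cache WORDPRESS;       # use the zone defined above
        fastcgi_cache_valid 200 60s;   # the one-minute "micro" lifetime
        fastcgi_cache_use_stale error timeout updating;
        add_header X-Cache $upstream_cache_status;  # HIT/MISS, handy for debugging
    }
}
```

The `X-Cache` header makes it easy to verify with curl that repeat requests are actually being served from the cache.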
Boom! Look at that! Sure, still some errors, but WOW! A 0-to-1000 linear increase in users over a minute and it ticks along like nobody's business. I noticed the maximum concurrent users was much lower with the Nginx cache. After digging in, I found it was because requests were completing quickly and not clogging up the system.
Wrapping it up, boss.
1000 users per second at the end of the minute, and the little web server was ticking along like nothing was happening. I’d consider that high performance given how little effort it takes to configure.
So what does performance do for the business? It lowers the cost of operation. The less time and hardware each request requires, the more requests a given piece of hardware can handle. This translates directly to lower operating costs for a given workload. Performance is king, within reason.
Leave a comment below and let me know what you think, or share your own experience with performance tuning.