This is a quick entry about benchmarking nginx and showing what a difference caching makes to performance.
We host the web page that you are reading on a micro instance on EC2. It is pretty cheap (we bought a reserved instance, since we know we will be running the server all year round), which works out to around 10 - 15 USD per month. All our sites are Rails based and we serve them through nginx and Passenger.
The experiment I wanted to run was simple: see how many requests the server can cope with:
- When a page is cached
- When the same page is not cached
I selected a page with text and some graphics (the benchmark seemed to ignore the graphics). The total size was about 1.7 kB. The page was almost entirely composed of static HTML, except for a couple of links that the application had to put together (and, of course, the layout).
I decided that benchmarking should be done from another EC2 instance in the same availability zone (us-east-1) in order to minimize network latency. For benchmarking I used ab (ApacheBench). I am sure there are more sophisticated tools out there, but for a simple test this is fine.
So first I ran the test with the cached page:
ab -kc 500 -n 5000 http://mysite.com/statictest.html
The above command asks the server for 5000 requests in total, with up to 500 of them in flight at any time (i.e. the benchmarking tool issues new requests even if previous requests have not been served yet); the -k flag enables HTTP keep-alive. If I did not specify the 500 bit, ab would execute all requests serially (which, for 5000 in total, might take a while).
The results are:
Server Software:        nginx/0.8.54
Server Hostname:        mysite.com
Server Port:            80

Document Path:          /statictest.html
Document Length:        1699 bytes

Concurrency Level:      500
Time taken for tests:   0.452 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    5000
Total transferred:      9673337 bytes
HTML transferred:       8566203 bytes
Requests per second:    11061.60 [#/sec] (mean)
Time per request:       45.201 [ms] (mean)
Time per request:       0.090 [ms] (mean, across all concurrent requests)
Transfer rate:          20898.95 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   4.7      0      17
Processing:     3   29  52.5     16     297
Waiting:        3   27  47.2     15     245
Total:          3   31  53.7     16     297

Percentage of the requests served within a certain time (ms)
  50%     16
  66%     20
  75%     24
  80%     25
  90%     36
  95%    219
  98%    233
  99%    250
 100%    297 (longest request)
Plenty of pretty numbers, but the ones I keep in mind are Requests per second: 11061.60 [#/sec] (mean) and Time per request: 45.201 [ms] (mean). So my dirt-cheap EC2 slice can serve about 11,000 requests in one second, and each one takes around 45 ms on average (in fact, 90% of them were served within 36 ms!).
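As a sanity check, the two mean "Time per request" figures ab prints are related through the concurrency level: the 45.201 ms figure is just the concurrency (500) divided by the throughput, times 1000. A one-liner confirms:

```shell
# mean time per request (ms) = concurrency * 1000 / requests per second
awk 'BEGIN { printf "%.3f\n", 500 * 1000 / 11061.60 }'
# prints 45.201
```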
Now if I repeat the same experiment (i.e. same concurrent requests, same total requests) against an uncached page, I get a response roughly seven times slower. If you run the experiment with a slightly more realistic scenario, the drop in performance is dramatic. All the time is consumed by the application putting together your view.
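For the uncached run I hit the application's root path with the same flags. A tiny helper (the rps_of name is mine, purely illustrative) pulls the throughput figure out of ab's report:

```shell
# Extract the mean requests-per-second figure from an ab report on stdin.
rps_of() { grep 'Requests per second' | awk '{print $4}'; }

# The uncached run itself would be:
#   ab -kc 500 -n 5000 http://mysite.com/ | rps_of
# Offline demo on a canned line from the report below:
echo 'Requests per second:    1570.80 [#/sec] (mean)' | rps_of
# prints 1570.80
```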
Here are the results:
Server Software:        nginx/0.8.54
Server Hostname:        mysite.com
Server Port:            80

Document Path:          /
Document Length:        173 bytes

Concurrency Level:      500
Time taken for tests:   3.183 seconds
Complete requests:      5000
Failed requests:        408
   (Connect: 0, Receive: 0, Length: 408, Exceptions: 0)
Write errors:           0
Non-2xx responses:      4592
Keep-Alive requests:    4592
Total transferred:      2359104 bytes
HTML transferred:       1487200 bytes
Requests per second:    1570.80 [#/sec] (mean)
Time per request:       318.310 [ms] (mean)
Time per request:       0.637 [ms] (mean, across all concurrent requests)
Transfer rate:          723.76 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   2.3      0      13
Processing:     1  153 357.5      4    2816
Waiting:        1  153 357.4      4    2816
Total:          1  154 358.5      4    2829

Percentage of the requests served within a certain time (ms)
  50%      4
  66%     40
  75%     48
  80%     53
  90%    705
  95%    916
  98%   1414
  99%   1479
 100%   2829 (longest request)
Notice that some requests failed outright (408 of them), and 4592 came back with non-2xx responses; the application was clearly struggling under the load. Also, the page is much smaller (just 173 bytes), which means my comparison is not that great (if anything, it is optimistic). Finally, the number of pages per second that our server managed to serve dropped by a factor of about seven.
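Comparing the two throughput figures from the reports above, the slowdown works out to roughly a factor of seven:

```shell
# cached rps / uncached rps, from the two ab reports
awk 'BEGIN { printf "%.1fx\n", 11061.60 / 1570.80 }'
# prints 7.0x
```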
In any case, it shows that even a very light HTML page is served at a much slower speed when it is not cached.
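For the curious, here is a minimal sketch of the kind of nginx setup that makes this work with Rails page caching. The paths and server block are illustrative assumptions, not our actual configuration; the idea is that try_files lets nginx serve a pre-rendered HTML file straight from disk and only hands the request to Passenger when no cached copy exists:

```nginx
# Illustrative sketch, not our real config.
server {
    listen      80;
    server_name mysite.com;
    root        /var/www/mysite/public;   # Rails page caching writes HTML here

    location / {
        # Serve the cached file if it exists; otherwise fall through to Rails.
        try_files $uri $uri/index.html $uri.html @passenger;
    }

    location @passenger {
        passenger_enabled on;
    }
}
```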