Lies, Damn Lies, and Benchmarks….
I get quite frustrated with benchmarks because they are very hard to perform properly, and even when you do them properly it's very hard to get any useful data from them.
It's all very well knowing that a web server can do 4,000 connections per second, but what we really want to know is something along the lines of:
How many shoppers at my ecommerce site can one web server handle IF:
- 200 users are doing free text searches
- 100 users are in the HTTPS shopping basket
- 500 users are just browsing
- 2 hackers are trying to get in
- and 1 proxy server is spooling 10,000 connections to cache the site
Anyway, after getting hassled by yet another customer for a benchmark on our EC2 VA load balancing appliance, I thought I'd take a quick crack at it:
Being lazy, my quick and dirty test involved using ApacheBench and fiddling with the HAProxy config in the same way that Willy did for his 100,000 TPS 10G test:
tcp-request content reject
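For context, that line sits in the frontend, so HAProxy accepts each connection, parses the request, and rejects it without ever touching a backend; you end up measuring raw connection/request handling rather than real proxying. A minimal sketch of what I mean (my assumption of the relevant bits, not Willy's exact config):

```
# Hypothetical minimal frontend for the raw-rate test: every
# request is rejected at the content stage, so no backend is needed.
frontend bench
    bind :80
    mode http
    tcp-request content reject
```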
So I fired up an m1.small instance for my test client and another one for my test load balancer and did a simple:
ab -n 10000 -c 120 http://220.127.116.11/
Actually, that's not entirely true… I ran lots of different ApacheBench commands to work out what concurrency level worked best, but to cut a long story short…
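If you want to repeat that fishing expedition, it can be sketched as a simple loop (a dry run that just prints the commands rather than firing them; the address is the load balancer from the test above, and the concurrency values are just an illustrative range, not the exact ones I tried):

```shell
# Dry-run sketch: build and print one ab command per concurrency level.
# Drop the `echo` and run "$cmd" instead to actually hammer the box.
cmds=""
for c in 10 40 80 120 160 200; do
  cmd="ab -n 10000 -c $c http://220.127.116.11/"
  echo "$cmd"
  cmds="$cmds$cmd "
done
```

ab's "Requests per second" line is the figure to compare across runs; it typically climbs with -c until the client or server saturates, then flattens or falls.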
The result was a pretty respectable:
Requests per second: 5303.80 [#/sec] (mean)
Fair enough, so what about terminating HTTPS using Pound -> HAProxy?
ab -n 1000 -c 120 https://18.104.22.168/
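For the curious, the Pound -> HAProxy arrangement just means Pound owns port 443, does the TLS work, and hands plain HTTP to HAProxy on the same box. A rough sketch of the Pound side (the paths and addresses are my placeholders, not the actual test config):

```
# Hypothetical Pound config: terminate HTTPS and forward the
# decrypted traffic to HAProxy listening locally on port 80.
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/server.pem"   # placeholder certificate path
    Service
        BackEnd
            Address 127.0.0.1   # HAProxy on the same instance
            Port    80
        End
    End
End
```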
Gives what I think is pretty respectable as well:
Requests per second: 155.98 [#/sec] (mean)
OK, so what about moving to a bigger instance like m1.medium?
Requests per second: 436.34 [#/sec] (mean)
Hmm, much better…
And what about HTTP?
Requests per second: 9038.45 [#/sec] (mean)
But hang on, I need to try moving the test client to m1.medium as well:
Requests per second: 12240.03 [#/sec] (mean)
Scorching… (guess my test client was too small; better go back and test the small instance load balancer again with HTTP but a larger test client…)
Requests per second: 5745.43 [#/sec] (mean)
Yup, that does it!
Now these results may not look like much compared to a bare metal hardware load balancer, but I think you've got to have a pretty big web site before you start worrying about overloading the EC2 VA…
I did a bit of web trawling and these numbers seem to match up with the Zeus results RightScale got here. However, slightly different testing methodology means different results…
BTW: I like the pretty graphs they have, so I will work on some actual throughput tests and update the blog as I go…
PS. I also tested this with TCP_SPLICE but it made no difference (it probably will make one on a throughput test…)
In conclusion, this benchmark revealed absolutely nothing interesting:
“HAProxy is fast enough that you don’t need to worry about it in the real world…….”
I was so envious of the graphs on the RightScale site that I have temporarily posted the EC2 monitoring graphs from the test (they were on the standard 5-minute window, so again I won't pretend that they actually mean anything useful :-) ).