Requests Per Second

One of the most harmful things about the way we discuss the performance of web applications is the key metric we use. Requests per second seems like the obvious metric, but it has several insidious characteristics that poison the discourse and lead developers down ever deeper rabbit holes chasing irrelevant gains. The metric stops us from doing A/B comparisons, or discussing potential improvements, without mental arithmetic that most of us won't actually do.

Instead of talking about requests per second, we should always be focussed on the duration of a given request. It’s what our users actually experience, and it’s the only number that means something on its own.

I should preface the remaining discussion by saying that most of it does not apply to performance problems at the scale of Facebook, Google or Yahoo. The thing is, statistically speaking, none of you are building applications that will operate at that scale. Sorry if I’m the one who broke this to you, but you’re not building the next Google :).

I should also state that requests per second is a really interesting metric when considering the throughput of a whole cluster. But throughput isn’t performance.

Diminishing marginal returns

The biggest problem I have with requests per second is that developers seem incapable of knowing when to stop optimising their applications. As the requests per second get higher and higher, the improvements become less and less relevant. It lets us think we’ve defeated that Pareto guy, while we waste ever-larger amounts of our employers’ time.

Let’s take two different performance improvements and compare them using both duration and req/s.

Patch   Before       After        Improvement
A       120 req/s    300 req/s    180 req/s
B       3000 req/s   4000 req/s   1000 req/s

As you can see, when you use req/s as your metric, change B seems like a MUCH bigger saving. It improves performance by 1000 requests a second instead of that measly 180, give that guy a raise! But let’s see what happens when we switch to using durations:

Patch   Before    After     Improvement
A       8.33 ms   3.33 ms   5.00 ms
B       0.33 ms   0.25 ms   0.08 ms
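The conversion between the two metrics is just duration in milliseconds = 1000 / (requests per second). A quick sketch of the arithmetic behind the tables above:

```python
def rps_to_ms(rps):
    """Convert requests per second into per-request duration in milliseconds."""
    return 1000.0 / rps

# Patch A: 120 req/s -> 300 req/s
print(f"A saves {rps_to_ms(120) - rps_to_ms(300):.2f} ms per request")    # 5.00 ms
# Patch B: 3000 req/s -> 4000 req/s
print(f"B saves {rps_to_ms(3000) - rps_to_ms(4000):.2f} ms per request")  # 0.08 ms
```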

You can see that the actual change in duration in B is vanishingly small: 8% of one millisecond! Odds are that improvement will disappear into statistical noise next to the latency of your network, or your user’s internet connection.

But when we use requests per second, that 1000 is so big and enticing that developers will do almost anything to get it. If they used durations as their metric, they’d probably have spent that time implementing a neat new feature, or responding to customer feedback.

Deltas become meaningless

A special case of my first complaint is that with requests per second the deltas aren’t meaningful without knowing the start and the finish points. As I showed above, a 1000 req/s change could be a tiny change, but it could also be an amazing performance coup. Take this next example:

Before    After        Diff
1 req/s   1001 req/s   1000 req/s

When expressed as durations, you can see that it made a huge difference:

Before    After      Diff
1000 ms   0.999 ms   999.0 ms

So 1000 requests per second could be irrelevant, or it could be fantastic. Durations don’t have this problem at all: 0.08 ms is obviously questionable, and 999 ms is an obvious improvement.
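The point is easy to check for yourself: the value of a req/s delta depends entirely on where you started. A small sketch:

```python
def improvement_ms(before_rps, after_rps):
    """Duration saved per request, in ms, for a given req/s change."""
    return 1000.0 / before_rps - 1000.0 / after_rps

# The same 1000 req/s delta, starting from two different baselines:
print(f"{improvement_ms(3000, 4000):.2f} ms")  # 0.08 ms: noise
print(f"{improvement_ms(1, 1001):.2f} ms")     # 999.00 ms: a huge win
```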

This problem most commonly expresses itself when people say “that changeset took 50 requests per second off my application”. Without the before and after numbers, we can’t tell if that’s a big deal, or if the guy needs to take a deep breath and get back to work.

The numbers don’t add up

Finally, requests per second don’t lend themselves to arithmetic, which leads developers to make silly decisions. The most common case I see is when people compare web servers to put in front of their Rails applications. The reasoning goes something like this:

Nginx does over 9000 requests per second, and Apache only does 6000 requests per second!! I’d better use Nginx unless I want to pay a 3000 requests per second tax.

When people do this comparison they seem to believe that by switching from Apache to Nginx their application will go from 100 req/s to 3100 req/s. As always, durations tell a different story.

Apache       Nginx        Diff
6000 req/s   9000 req/s   3000 req/s
0.17 ms      0.11 ms      0.06 ms

So we can see that, odds are, you’ll only gain about a twentieth of a millisecond by switching. Perhaps that improvement is worthwhile for your application, but is it worth the additional complexity?
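One way to see this is to add the server’s per-request overhead to the application’s own response time. Assuming, hypothetically, an application that manages 100 req/s on its own (10 ms per request), the end-to-end picture barely moves:

```python
APP_MS = 1000.0 / 100  # the application's own work: 10 ms per request (assumed)

for name, server_rps in [("Apache", 6000), ("Nginx", 9000)]:
    overhead_ms = 1000.0 / server_rps  # the server's per-request overhead
    total_ms = APP_MS + overhead_ms    # end-to-end request duration
    print(f"{name}: {total_ms:.2f} ms/request -> {1000.0 / total_ms:.1f} req/s")
# Apache: 10.17 ms/request -> 98.4 req/s
# Nginx: 10.11 ms/request -> 98.9 req/s
```

Far from jumping to 3100 req/s, the application’s overall rate barely changes, because the server’s share of each request is tiny.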


Durations are a much more useful, and more honest, metric when comparing performance changes in your applications. Requests per second is too widespread for us to stop using it entirely, but please don’t use it when talking about the performance of your web applications or libraries.