[OT] benchmarking "typical" programs

Alistair McGlinchy amcglinchy at gmail.com
Thu Sep 20 09:46:53 BST 2012


On 20 September 2012 00:28, Rafiq Gemmail <rafiq at dreamthought.com> wrote:

> The tools did not matter so much as the sample data and the fact that I
> was able to compare runs against a fairly consistent architecture, datasets
> and execution paths (a function of the test data).
>

Well put.

I have no experience of Splunk (but should really give it a go). My version
of this same problem at $work[-2] was taking Apache logs from peak and
re-injecting them via wget to generate CPU (etc.) load, while running
business-defined journeys with Selenium on an otherwise idle box to measure
application latency.
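As a rough sketch of the replay half of that setup (the log format, regex
and wget flags here are my assumptions, not the exact script I used back
then): pull the GET paths out of an Apache access log and re-issue each one
against a test box, discarding the response bodies.

```python
import re
import subprocess

# Matches the request field of an Apache common/combined log line (assumed format).
REQUEST = re.compile(r'"GET (?P<path>\S+) HTTP/1\.[01]"')

def paths_from_log(lines):
    """Yield the request path for every GET found in the log lines."""
    for line in lines:
        m = REQUEST.search(line)
        if m:
            yield m.group("path")

def replay(log_path, base_url):
    """Re-issue each logged GET via wget, throwing the body away."""
    with open(log_path) as fh:
        for path in paths_from_log(fh):
            subprocess.run(
                ["wget", "-q", "-O", "/dev/null", base_url + path],
                check=False,  # a 404 or 500 on replay is data, not a fatal error
            )
```

Pointing `replay("access.log", "http://test-box:8080")` at an otherwise idle
box gives you repeatable background load while the Selenium journeys run.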

The hard parts are working out how your code is used in the real world,
which bits users complain about being slow and which bits break when
timeouts happen. [*]

/sarcasm  Don't forget you can always make your code faster on all but an
arbitrarily large finite subset of inputs via:
http://en.wikipedia.org/wiki/Blum's_speedup_theorem

Cheers,

Alistair


[*] Also hard is maintaining a meaningful state: if you normally get 10_000
new users created in your peak hour, how many hours can you run your peak
load before you need to restore the database back to its pre-peak state?


More information about the london.pm mailing list