[OT] benchmarking "typical" programs

Nicholas Clark nick at ccl4.org
Thu Sep 20 12:35:18 BST 2012


On Thu, Sep 20, 2012 at 12:28:20AM +0100, Rafiq Gemmail wrote:
> 
> On 19 Sep 2012, at 12:09, Nicholas Clark wrote:

So, what I missed from this was:

I'm trying to get better benchmarks for the perl interpreter itself.

Lots of "one trick pony" type benchmarks exist, but very few that actually
try to look like they are doing typical things typical programs do, at the
typical scales real programs work out, so

    Does the mighty hive mind of london.pm have any suggestions (preferably
    useful) of what to use for benchmarking "typical" Perl programs?

> > Needs to do realistic things on a big enough scale to stress a typical system.
> > Needs to avoid external library dependencies, or particular system specifics.
> > Preferably needs to avoid being too Perl version specific.
> > Preferably needs to avoid being a maintenance headache itself.


Sadly, things like regression tests *aren't* decent benchmarks, because they're

a) trying to test corner cases, not common cases
b) trying to do this as efficiently as possible
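
For contrast, the kind of thing I'm after is closer to this rough sketch,
which sticks to core modules only and isn't version-specific. The workload,
record counts and iteration counts are invented purely for illustration --
it's not a proposed benchmark, just the general shape:

    #!/usr/bin/perl
    # Rough sketch only: a "do typical work" style benchmark using
    # nothing but core modules.  The data and counts are made up.
    use strict;
    use warnings;
    use Benchmark qw(timethese);

    # Some moderately sized structured data, standing in for the
    # hash/array churn that ordinary programs spend their time on.
    my @records;
    push @records, { id => $_, name => "user$_", score => $_ % 97 }
        for 1 .. 10_000;

    timethese(50, {
        sort_and_group => sub {
            my %by_score;
            push @{ $by_score{ $_->{score} } }, $_->{name}
                for sort { $a->{id} <=> $b->{id} } @records;
        },
        stringify => sub {
            my $out = '';
            $out .= join(',', @{$_}{qw(id name score)}) . "\n" for @records;
        },
    });

Something with that shape, but derived from what real programs actually
spend their time doing, is what I'm fishing for.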


As I'm still gainfully siphoning money from TPF to HMRC, I don't have an
employer from whom to derive test cases to turn into benchmarks.

http://news.perlfoundation.org/2012/09/improving-perl-5-grant-report-9.html

> Not sure if that helps.

Sadly not really. My fault for not asking a good enough question.

(Oooh. I managed to mention pony in context, quite unintentionally)

Nicholas Clark

PS: Ilmari, lunch!

