dom at happygiraffe.net
Fri Jun 23 10:35:29 BST 2006
On Fri, Jun 23, 2006 at 10:29:41AM +0100, alex at owal.co.uk wrote:
> Can anyone suggest some reading for optimising perl? (This is inherited
> code, so don't blame me for the lazy algorithms used :-)
Nicholas Clark did a talk, "When perl is not quite fast enough".
> I am basically munging large amounts of data from files or sybase
> databases and either creating large files or filling in other sybase
> databases.
> We are hitting memory limits and of course always want things to run faster.
> I have had a suggestion to use Memoize, but this only seems to be useful
> on stateless functions, i.e. things which will return the same value every
> time the function is called with the same parameters. Right? It comes as
> standard with perl 5.8, so someone must find this useful.
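Yes, that's the constraint: Memoize is only safe on pure functions. A minimal sketch (the function and its arguments here are made up for illustration):

```perl
use strict;
use warnings;
use Memoize;   # core since perl 5.8

# Hypothetical pure function: same inputs always give the same result,
# so its return values are safe to cache.
sub lookup_rate {
    my ($region, $band) = @_;
    # ... imagine an expensive computation or database hit here ...
    return length($region) * $band;
}

memoize('lookup_rate');

my $first  = lookup_rate('emea', 3);   # computed
my $second = lookup_rate('emea', 3);   # served from the cache
```

Anything that reads mutable state, or whose result depends on when it is called, must not be memoized.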
> We are building huge hashes in memory which we might tie to disk, or (my
> preference) tie to a memcached database. This gets us around the 4Gb
> memory limit in the perl binary we are using by spreading the memory
> across two processes.
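Tying the hash to disk is a small change at the call sites, since the hash keeps its normal interface. A minimal sketch using the core SDBM_File module (the key names are invented; note SDBM has a small per-record size limit, and for memcached you would reach for a client module such as Cache::Memcached instead):

```perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);

# Tie the hash to an on-disk DBM file instead of holding it all in RAM.
tie my %big, 'SDBM_File', "$dir/bighash", O_RDWR | O_CREAT, 0666
    or die "tie failed: $!";

# Writes go through to disk; only the entries you touch are in memory.
$big{"key:$_"} = $_ * 2 for 1 .. 1000;

my $check = $big{'key:42'};

untie %big;
```

The trade is obvious: every access becomes an I/O operation, so it is slower per lookup but keeps the process well under the memory ceiling.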
For stuff like that, it sounds like you should look at rewriting bits in
C. You would be able to make things more memory-efficient without the
overhead of an SV for each item, perhaps?
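Short of a full C rewrite, you can get much of that saving in pure Perl by packing values into one string rather than keeping an SV per item. A minimal sketch (the "N" format and the squared values are just for illustration):

```perl
use strict;
use warnings;

# Store many integers in one packed string, 4 bytes each with 'N',
# instead of one full SV per number.
my $packed = '';
$packed .= pack('N', $_ * $_) for 0 .. 9;

# Random access by byte offset, unpacking only on demand.
my $i   = 3;
my $val = unpack('N', substr($packed, $i * 4, 4));
```

This is the same idea as the C rewrite: flat, fixed-width storage with decoding only at the point of use.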