Brown trousers time :~
Lyle - CosmicPerl.com
perl at cosmicperl.com
Mon Oct 8 12:20:32 BST 2007
Jonathan Stowe wrote:
> On Mon, 2007-10-08 at 01:47 +0100, Lyle - CosmicPerl.com wrote:
>> I'm concerned that I'll have to quickly write some C libraries for the
>> heavy traffic parts
>>
>
> For the most part, unless you need to do heavy mathematics or need to
> use a pre-existing C library then writing XS code is probably not the
> way to go - for simple operations the overhead of calling into the XS is
> going to offset any gain in run time. The best thing to do is to write your
> code in Perl and then profile it - if profiling indicates that you have
> certain parts of the code that are taking up significant parts of the
> run time then you may have a candidate for optimisation - but you really
> don't want to even bother to think about micro-optimizations before you
> have completed the code.
>
Ok.
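
For completeness, here's roughly how I'd approach the profiling step - just a
sketch, using the stock Devel::DProf profiler and the core Benchmark module,
with throwaway subs standing in for the real code paths:

  # Profile the whole program first to find the hot spots:
  #   perl -d:DProf myscript.pl
  #   dprofpp tmon.out
  #
  # Then compare candidate rewrites of one hot spot head to head:
  use strict;
  use warnings;
  use Benchmark qw(cmpthese);

  # Placeholder subs standing in for the real code paths.
  sub join_version   { return join '', map { $_ * 2 } 1 .. 500 }
  sub concat_version { my $s = ''; $s .= $_ * 2 for 1 .. 500; return $s }

  # Run each for about 3 CPU seconds and print a comparison table.
  cmpthese( -3, {
      join   => \&join_version,
      concat => \&concat_version,
  });
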
>> What's the mod_perl equivalent in Win32? I'm guessing PerlScript in ASP,
>> but is that faster? I can't find any benchmarks.
>>
> Well of course Apache runs fine on Windows, so the answer is, er,
> mod_perl. If you are stuck with IIS then probably the very closest
> thing is to use ISAPI Perl which is part of the Activeperl installation
> - this creates an in-process and persistent perl interpreter and you
> have all the same caveats as with Apache::Registry programs. Classic
> ASP is a completely different model and using PerlScript there is just
> really sugar coating over the whole nastiness - you would need to
> completely rewrite nearly everything to use it. Also ASP has been dead
> for a few years now - being replaced by ASP.NET which is a much nicer
> model but far less easy to use Perl with (though not impossible.)
>
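(For anyone following along, the Apache::Registry caveat boils down to
package variables surviving between requests, because the script is compiled
once and then re-run inside a persistent interpreter. A throwaway sketch:)

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Under plain CGI this always prints 1.  Under Apache::Registry or
  # ISAPI Perl the interpreter persists between requests, so the
  # counter keeps climbing - handy for caches, dangerous for anything
  # you assumed would reset on every hit.
  our $hits;
  $hits++;

  print "Content-type: text/plain\n\n";
  print "Requests served by this interpreter: $hits\n";
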
I've used the ASP CPAN modules before, which have allowed me to work with
ActiveState's ISAPI PerlScript without the need to make huge
modifications to my code.
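
The pages themselves barely change between the two - a trivial sketch of the
sort of ASP-style page I mean ($Response is supplied by the environment,
whether that's PerlScript under IIS or the Apache::ASP module under mod_perl):

  <%
      # Embedded Perl in an ASP-style page.  The $Response object is
      # provided by the host environment, so the same page can run
      # under PerlScript/IIS or under Apache::ASP with little change.
      my $now = scalar localtime;
      $Response->Write("Hello from Perl, generated at $now");
  %>
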
>> Would it be best to have separate databases (all in MySQL) for different
>> parts of the program? So that the database tables that are heavily
>> accessed are totally separate from those that aren't.
>>
> It won't make any difference whatsoever - the same engine has to deal
> with the requests anyway. Separate databases just make your design less
> rational and will add overhead at the language layer. Splitting the most
> used tables across separate disc spindles will certainly help, but MySQL
> never used to have a facility to do that (though I'm sure it will turn up
> in the version immediately after the one you're using.)
>
> /J\
>
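One thing I did stumble across while reading up: MyISAM tables seem to take
per-table DATA DIRECTORY / INDEX DIRECTORY options, which would let me pin a
hot table to its own spindle - though it relies on symlinks, so it's
apparently no use on Win32. A rough sketch via DBI, with made-up connection
details and paths:

  use strict;
  use warnings;
  use DBI;

  # Hypothetical credentials and paths - purely illustrative.
  my $dbh = DBI->connect('dbi:mysql:database=shop;host=localhost',
                         'shop_user', 'secret', { RaiseError => 1 });

  # Put the heavily-hit table's data and index files on a separate
  # disc.  Needs MyISAM plus symlink support in the OS/MySQL build,
  # so in practice this is a Unix-only trick.
  $dbh->do(q{
      CREATE TABLE hit_log (
          id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          url    VARCHAR(255) NOT NULL,
          hit_at DATETIME     NOT NULL
      ) ENGINE=MyISAM
        DATA DIRECTORY  = '/mnt/fastdisk/mysql/shop'
        INDEX DIRECTORY = '/mnt/fastdisk/mysql/shop'
  });

  $dbh->disconnect;
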
Thanks.
Lyle