Proebsting's Law

David Landgren david at landgren.net
Wed Dec 14 20:09:10 GMT 2005


Daniel Barlow wrote:
> On Wed, Dec 14, 2005 at 01:10:39PM +0000, Simon Wistow wrote:
>> How much optimisation can you do to the basic compiler cycle? Have there 
>> been any amazing new innovations in tokenising?
> 
> Unless the application under test is a compiler itself, I don't think
> that rewriting the lexer is going to make much difference to the speed
> of the generated code.
> 
> Have there been amazing innovations in code generation?  Well, there
> have definitely been significant changes in the CPU architectures on
> which the output runs, and I would be surprised if code generators had
> not changed quite a lot to take advantage of that.

Dynamo sounds quite fascinating. I hope a parrot-head has a look at this 
idea.

   http://arstechnica.com/reviews/1q00/dynamo/dynamo-1.html

For the link shy:

<quote>
Dynamo is an odd beast. It is, in essence, an interpreter for HP's 
PA-8000 instruction set that itself runs on a PA-8000 processor. That's 
right -- it interprets programs that could just as easily be executed 
natively on the same hardware. [...] What's surprising is that Dynamo 
"inadvertently" became practical. Programs "interpreted" by Dynamo are 
often faster than if they were run natively. Sometimes by 20% or more.
</quote>

It weaves its magic by recording hot code paths as they execute and 
assembling them into straight-line fragments, then stitching fragments 
to other fragments over and over again. The end result is long, largely 
branch-free runs of machine code... something that fits right in with 
modern CPU architectures and their long, fragile pipelines.
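To make the idea concrete, here's a toy sketch in Python of that trace-caching trick: interpret a tiny register machine, count how often each backward-branch target is hit, and once a target is "hot", record one straight-line pass through the loop and replay it directly on later iterations. The instruction set, the threshold, and the VM itself are all invented for illustration; Dynamo did this with native PA-8000 code, not a toy interpreter.

```python
def replay(ops, regs):
    """Run a recorded fragment; return the pc it exits to."""
    for fpc, ins in ops:
        if ins[0] == 'set':
            regs[ins[1]] = ins[2]
        elif ins[0] == 'add':
            regs[ins[1]] += ins[2]
        elif ins[0] == 'jnz':
            # Taken: loop back to the fragment start. Not taken: exit
            # to the instruction after the branch.
            return ins[2] if regs[ins[1]] != 0 else fpc + 1
    return ops[-1][0] + 1  # fell off the end of the fragment

def run(program, hot=2):
    regs, counters, fragments = {}, {}, {}
    pc, trace = 0, None  # trace = (start_pc, recorded ops) while recording
    while True:
        if trace is None and pc in fragments:
            # Hot loop: replay the cached fragment, a branch-free run,
            # until its loop-closing branch finally falls through.
            while True:
                exit_pc = replay(fragments[pc], regs)
                if exit_pc != pc:
                    pc = exit_pc
                    break
            continue
        ins = program[pc]
        if trace is not None:
            trace[1].append((pc, ins))  # record as we interpret
        if ins[0] == 'set':
            regs[ins[1]] = ins[2]; pc += 1
        elif ins[0] == 'add':
            regs[ins[1]] += ins[2]; pc += 1
        elif ins[0] == 'jnz':
            target = ins[2]
            if trace is not None and target == trace[0]:
                fragments[target] = trace[1]  # install the fragment
                trace = None
            if regs[ins[1]] != 0:
                if target < pc:  # backward branch: probably a loop
                    counters[target] = counters.get(target, 0) + 1
                    if counters[target] == hot and target not in fragments:
                        trace = (target, [])  # record the next iteration
                pc = target
            else:
                pc += 1
        elif ins[0] == 'halt':
            return regs

# Count down from 5 to 0 in a loop; iterations after the second are
# replayed from the fragment cache rather than re-dispatched.
program = [
    ('set', 'i', 5),
    ('add', 'i', -1),
    ('jnz', 'i', 1),
    ('halt',),
]
```

The real thing obviously does far more (fragment linking, cache eviction, speculation guards), but the shape is the same: pay interpreter overhead on cold code, then recoup it on the hot paths.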

David
-- 
"It's overkill of course, but you can never have too much overkill."


