Best practice for variables shared between processes?

Alexander Clouter alex at digriz.org.uk
Tue Sep 21 08:32:42 BST 2010


Roger Burton West <roger at firedrake.org> wrote:
>
> I wish to have two processes, a "producer" (which will create files) and
> a "consumer" (which will do something with them), running
> simultaneously. Ideally the producer would push filenames to a list as
> it finishes producing them, while the consumer would shift them off the
> same list (or loop-wait, if the list is empty).
>
I usually prefer the 'maildir'-esque approach where you have a 'queue', 
'processing' and 'processed' directory structure.  More exotic 
approaches can involve CUPS (yes, the printer daemon), or you could get 
your producer to schedule each job by priming the 'at' daemon.
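
Roughly what I mean by the directory approach, as a minimal Perl sketch 
of the consumer side -- directory names as above, and process_file() is 
just a stand-in for whatever you actually do with a finished file:

  #!/usr/bin/perl
  # Minimal consumer sketch: claim a job by rename()ing it from queue/
  # into processing/ (atomic on the same filesystem), work on it, then
  # move it into processed/.
  use strict;
  use warnings;
  use File::Spec;

  my ($queue, $processing, $processed) = qw(queue processing processed);

  while (1) {
      opendir my $dh, $queue or die "cannot open $queue: $!";
      my @jobs = grep { !/^\./ } readdir $dh;
      closedir $dh;

      unless (@jobs) {      # queue empty, so loop-wait
          sleep 5;
          next;
      }

      for my $job (@jobs) {
          my $claimed = File::Spec->catfile($processing, $job);
          # rename() either succeeds for us or fails because another
          # consumer got there first -- no locking required
          next unless rename File::Spec->catfile($queue, $job), $claimed;

          process_file($claimed);
          rename $claimed, File::Spec->catfile($processed, $job)
              or warn "could not archive $job: $!";
      }
  }

  sub process_file { my ($path) = @_; print "processing $path\n" }

The producer should write into a temporary name (or a separate tmp/ 
directory) and only rename() the file into queue/ once it is complete, 
so the consumer never picks up a half-written file.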

With the at/CUPS approaches you are not dependent on the filesystem, or 
on making sure every operation on it is atomic...plus the job 
processing of at/CUPS is pretty reliable.

I would probably opt for 'at' (use CUPS if the consumers are on 
different nodes, i.e. distributed) as you then do not need to put the 
queuing functionality and logic into your own scripts, and it is dead 
easy to reschedule a job.
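
For the 'at' route the producer-side hook can be tiny; a sketch only, 
assuming the consumer is a standalone script (the 
/usr/local/bin/consumer.pl path is made up) and that filenames are 
shell-safe:

  #!/usr/bin/perl
  # Producer-side sketch: once a file is complete, hand it to at(1) so
  # atd owns the queuing and the retrying.
  use strict;
  use warnings;

  sub schedule_job {
      my ($file) = @_;
      # at(1) reads the commands to run from stdin
      open my $at, '|-', 'at', 'now'
          or die "cannot run at: $!";
      print {$at} "/usr/local/bin/consumer.pl '$file'\n";
      close $at or warn "at exited with status $?";
  }

  schedule_job('/var/spool/myapp/output-0001.dat');   # example filename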

Cheers

-- 
Alexander Clouter
.sigmonster says: This system will self-destruct in five minutes.


