error/exception aggregation during a "big" dbic load

Yan Fitterer yan at fitterer.org
Tue May 8 23:13:19 BST 2012


One obvious approach would be to validate the data in a separate step. 
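
Roughly, something along these lines for the separate pass (a sketch only:
the tab-separated layout and the required 'email' column are assumptions
for illustration, so adapt it to the real file format):

#!/usr/bin/perl
use strict;
use warnings;

# Separate pass over the input file: collect every problem, report them
# all at once, and only start the DBIC load if the file is clean.
sub validate_file {
    my ($path) = @_;
    my @errors;

    open my $fh, '<', $path or die "Cannot open $path: $!";
    chomp( my $header = <$fh> );
    my @cols = split /\t/, $header;

    while ( my $line = <$fh> ) {
        chomp $line;
        my %row;
        @row{@cols} = split /\t/, $line;

        push @errors, "line $.: user record has no email"
            unless defined $row{email} && length $row{email};
        # ...further per-row checks go here...
    }
    close $fh;
    return @errors;
}

my @errors = validate_file( $ARGV[0] );
if (@errors) {
    print "$_\n" for @errors;
    exit 1;
}
# ...otherwise go ahead with the txn_do load...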

Sent from my iPhone

On 8 May 2012, at 22:03, Bob MacCallum <uncoolbob at gmail.com> wrote:

> Hello - I would greatly appreciate any suggestions for how to keep track of
> errors while loading a big chunk of data into a dbic schema (in a
> txn_do(...), from a file).
> 
> The problem is that I don't want to throw an exception as soon as I
> encounter the first error (for example, a user record missing its required
> email field). I would rather create some placeholder data and carry on
> loading (for example, loading the objects that belong to that user) so
> that I can dump a list of all the errors right at the end, before rolling
> back the transaction. The reason I want this is that the input file is
> provided by a third party, and it would reduce the number of emails back
> and forth if we can send them as many corrections as possible at one time.
> 
> Being a bioinformaticist and not a pro coder, I didn't know what to google
> for (I tried what I could), so apologies if the answer's out there on CPAN.
> I can't be the only person who's ever wanted to do this.
> 
> thanks,
> Bob.
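
For the in-transaction aggregation you describe, something along these
lines might work. It's only a sketch: My::Schema, the 'User' resultset,
the hard-coded rows and the placeholder email address all stand in for
whatever your real load uses.

use strict;
use warnings;
use Try::Tiny;

# Placeholder schema class and connection details.
my $schema = My::Schema->connect( 'dbi:Pg:dbname=mydb', 'user', 'pass' );

# Placeholder rows standing in for whatever gets parsed from the file.
my @rows = (
    { name => 'Alice', email => 'alice@example.org' },
    { name => 'Bob',   email => '' },                  # will be flagged
);

my @errors;

try {
    $schema->txn_do(sub {
        my $line_no = 0;
        for my $row (@rows) {
            $line_no++;

            my $email = $row->{email};
            unless ( defined $email && length $email ) {
                push @errors, "line $line_no: user record has no email";
                $email = "placeholder-$line_no\@example.invalid";
            }

            my $user = $schema->resultset('User')->create({
                name  => $row->{name},
                email => $email,
            });

            # ...create the objects that belong to this user here,
            # pushing onto @errors in the same way when they have problems...
        }

        # Dying inside txn_do makes DBIC roll the whole transaction back.
        die "aggregated load errors\n" if @errors;
    });
}
catch {
    my $err = $_;
    # Report everything that was collected, not just the first problem.
    print STDERR "$_\n" for @errors;
    die $err unless @errors;   # rethrow genuinely unexpected exceptions
};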
