BerkeleyDB locking subsystem
Dirk Koopman
djk at tobit.co.uk
Tue Aug 1 09:47:18 BST 2006
On Mon, 2006-07-31 at 21:04 -0400, Matt Sergeant wrote:
> > The reason I'm
> > asking is that MySQL couldn't do the job.
>
> Sure it could, and has done in many places. You just can't blindly
> insert data of that volume into storage and expect to have the
> backend know beforehand "Oh, he's inserting gigs of data - we should
> partition sensibly then". You have to do this stuff explicitly. I
> don't know how BDB handles it, but I suspect it's similar.
Oh, and before I forget, remember that there are three backend storage
engines for MySQL: the standard MyISAM one (with known performance
issues on very large tables, especially indexes), InnoDB (quite a
serious database engine) and (guess what) Berkeley DB.
And then, if memory serves, there is the SAP backend in MySQL MaxDB (but
I may just be misremembering this as something different).
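Either way, the engine is chosen per table at creation time, so nothing
stops you mixing them in one database. A hypothetical DBI one-liner
(the table, columns and credentials are all made up):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:test', 'user', 'pass',
                       { RaiseError => 1 });

# ENGINE= picks the backend for this table only
$dbh->do(q{
    CREATE TABLE cache (
        k VARBINARY(255) NOT NULL PRIMARY KEY,
        v LONGBLOB
    ) ENGINE=InnoDB
});

$dbh->disconnect;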
But I suspect that you are storing (actually "caching") serialised perl
hashes in a simple key/value pair arrangement and don't really want the
overhead of a "database" as such. Bearing in mind the overhead of doing
the serialisation, personally I don't think the storage engine has much
to do other than swallow it gracefully and regurgitate on demand.
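For the avoidance of doubt, something like this minimal sketch is what I
have in mind (the file name, key and record shape are made up, and
DB_File plus Storable is only one way of doing it):

use strict;
use warnings;
use Fcntl;
use DB_File;
use Storable qw(freeze thaw);

# tie a Berkeley DB hash file to a perl hash
my %cache;
tie %cache, 'DB_File', '/tmp/cache.db', O_RDWR|O_CREAT, 0644, $DB_HASH
    or die "cannot tie cache: $!";

# swallow: serialise a perl hash and store it under a key
my %record = (user => 'fred', hits => 42);
$cache{'user:fred'} = freeze(\%record);

# regurgitate: thaw it back on demand
my $back = thaw($cache{'user:fred'});
print "$back->{user} has $back->{hits} hits\n";

untie %cache;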
Dealing with the swallowing (if that truly is the major activity) will
stress *any* storage engine because insertion tends to be one of the
slowest things you can do, especially if you wish to be ACID compliant.
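If insertion really is the bottleneck, the usual trick (in BDB as much
as in any SQL engine) is to batch many puts into one transaction, so
that you pay for one log flush per batch rather than per record. A
rough sketch using the BerkeleyDB module (the paths, flags and batch
size are illustrative, and the exact open flags vary between BDB
versions):

use strict;
use warnings;
use BerkeleyDB;

# a transactional environment: memory pool, locking, logging
# (the home directory must already exist)
my $env = BerkeleyDB::Env->new(
    -Home  => '/tmp/bdb-env',
    -Flags => DB_CREATE|DB_INIT_MPOOL|DB_INIT_TXN|DB_INIT_LOCK|DB_INIT_LOG,
) or die "cannot open env: $BerkeleyDB::Error";

my $db = BerkeleyDB::Hash->new(
    -Filename => 'cache.db',
    -Flags    => DB_CREATE,
    -Env      => $env,
) or die "cannot open db: $BerkeleyDB::Error";

# one transaction per batch: one commit per thousand records
# instead of one per record
my $txn = $env->txn_begin();
$db->Txn($txn);
for my $i (1 .. 1000) {
    $db->db_put("key$i", "value$i") == 0
        or die "put failed: $BerkeleyDB::Error";
}
$txn->txn_commit();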
As Matt says, you must have a plan...
Dirk