BerkeleyDB locking subsystem

Matt Sergeant msergeant at
Tue Aug 1 02:04:27 BST 2006

On 31-Jul-06, at 1:19 PM, Thomas Busch wrote:

> Do you know if sqlite 3.x can handle gigabytes of data ?

Yes, but you have to be smart about indexing and storage. And don't  
expect your data to be cached by the filesystem (i.e. you're hitting  
disk for every query, so you're limited by spindle speed, and two  
rotations minimum per read).
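To make the indexing point concrete, here's a minimal sketch (table and column names are my own, illustrative only) showing how an index turns a full-table scan into a B-tree lookup, which is what keeps a multi-gigabyte SQLite file usable:

```python
import sqlite3

# Illustrative schema, not from the original discussion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events (user, payload) VALUES (?, ?)",
    [("user%d" % (i % 100), "x" * 50) for i in range(10000)],
)
conn.execute("CREATE INDEX idx_events_user ON events(user)")
conn.commit()

# EXPLAIN QUERY PLAN shows whether the query uses the index
# or falls back to scanning the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user = 'user42'"
).fetchall()
print(plan)
```

Without the CREATE INDEX line, the plan reports a scan of the whole table, which at gigabyte scale means exactly the disk-bound behaviour described above.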

> The reason I'm
> asking is that MySQL couldn't do the job.

Sure it could, and has done so in many places. You just can't blindly  
insert data at that volume and expect the backend to know beforehand  
"Oh, he's inserting gigs of data - we should partition sensibly  
then". You have to do this stuff explicitly. I don't know how BDB  
handles it, but I suspect it's similar.
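By "explicitly" I mean something like the following sketch, where the application routes rows into per-month tables itself rather than relying on the engine to partition for it (the table names and the YYYY-MM partition key are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def partition_table(month):
    # One table per month; created lazily on first insert.
    name = "logs_" + month.replace("-", "_")
    conn.execute("CREATE TABLE IF NOT EXISTS %s (ts TEXT, msg TEXT)" % name)
    return name

def insert(ts, msg):
    # Partition key is the leading YYYY-MM of the timestamp.
    table = partition_table(ts[:7])
    conn.execute("INSERT INTO %s VALUES (?, ?)" % table, (ts, msg))

insert("2006-07-31 13:19", "first row")
insert("2006-08-01 02:04", "second row")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

Queries then only touch the partitions they need, so each table's indexes stay small enough to be useful.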

If you get stuck, post on the sqlite-users list - there's a fair bit  
of experience there storing gigs of data.

