[londonperl] A difficult filesystem

David Alban extasia at extasia.org
Wed Jun 20 13:50:41 BST 2007

This may or may not help, but I'll submit it for consideration...

Gather information on all sets of links pointing to each inode.
Delete links so that each inode has only one directory entry.
Hopefully that will reduce the number of files to a point where rsync
will run.  On the target machine, recreate the 'extra' hard links.
Unless six million files is too much for rsync or either of your machines.
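
The first step above - grouping every directory entry by the inode it
points to, keeping one canonical path per inode, and noting the extras
to delete and later recreate - could be sketched like this (a minimal
sketch in Python rather than Perl; the function names and the idea of
sorting to pick a canonical path are my own assumptions):

```python
import os
from collections import defaultdict

def links_by_inode(root):
    """Walk `root` and group paths of multiply-linked files by (device, inode)."""
    groups = defaultdict(list)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_nlink > 1:              # only files with extra hard links
                groups[(st.st_dev, st.st_ino)].append(path)
    return groups

def split_keep_extra(groups):
    """For each inode keep one canonical path (first in sort order); the
    rest can be unlinked before the rsync and recreated on the target."""
    plan = {}
    for key, paths in groups.items():
        paths.sort()
        plan[paths[0]] = paths[1:]           # canonical path -> extra links
    return plan
```

After the rsync completes, replaying the plan on the target is just
`os.link(canonical, extra)` for each recorded pair.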

On 6/20/07, Andy Armstrong <andy at hexten.net> wrote:
> I have an rsync based backup regime that results in a filesystem
> containing lots of inodes many of which have as many as fifty hard
> links pointing to them. I'm not sure how many files there are in
> total - the little script I have analysing it has been running all
> night and is up to 26,000,000 plus. debugfs tells me there are a
> shade over 6,000,000 inodes.
> I want to migrate the whole (ext3) filesystem onto another, larger
> device.
> Any of the normal hard-link-preserving copying methods run out of
> memory pretty early - for obvious reasons.
> So I copied the whole filesystem (dd if=x of=y) onto the new device
> with a view to growing it in place. Unfortunately parted doesn't like
> it either - "You found a bug in GNU Parted".
> While I work up a bug report for parted I've got a little Perl prog
> crawling the whole filesystem and building a SQLite DB with two
> tables - one for dir entries and one for inodes. When that finishes
> (sometime next week at the current rate of progress) I'll have a 30G
> or so database with more than 100 million rows.
> Then I should be in a position to copy the FS preserving hard links.
> Phew.
> While I'm waiting does anyone have tips for other tools that might be
> useful?
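
The two-table database Andy describes might look something like the
sketch below (the post gives no schema, so the table and column names
here are hypothetical; Python's bundled sqlite3 stands in for the Perl
DBI code):

```python
import sqlite3

# Hypothetical layout: one row per inode, one row per directory entry.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inodes (
        ino    INTEGER PRIMARY KEY,   -- inode number
        nlink  INTEGER NOT NULL      -- hard-link count from lstat()
    );
    CREATE TABLE dirents (
        path   TEXT PRIMARY KEY,      -- full path of the directory entry
        ino    INTEGER NOT NULL REFERENCES inodes(ino)
    );
    CREATE INDEX dirents_ino ON dirents(ino);
""")

def links_for_multilinked(conn):
    """All paths sharing a multiply-linked inode, grouped by inode so a
    copier can write the file once and recreate the remaining links."""
    cur = conn.execute("""
        SELECT d.ino, d.path
        FROM dirents d JOIN inodes i ON i.ino = d.ino
        WHERE i.nlink > 1
        ORDER BY d.ino, d.path
    """)
    return cur.fetchall()
```

With the index on `dirents(ino)`, that join stays cheap even at a
hundred million rows, which is the whole point of pushing the bookkeeping
out of RAM and into SQLite.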

Live in a world of your own, but always welcome visitors.

More information about the london.pm mailing list