e2fsck performance for large lost+found dirs

Running e2fsck on a filesystem with a large number of disconnected inodes
can take a long time to complete.   This can happen with Lustre, since there
may be hundreds of thousands of inodes in one directory, and extent corruption
can wipe out the whole directory (more on that in a separate email).

Reattaching all of the disconnected inodes to lost+found is an O(n^2)
operation, which can take a very long time when there are millions of them.
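As a rough back-of-envelope estimate (assuming ~4KB leaf blocks and roughly
20 bytes per "#ino" entry, i.e. about 200 entries per block), inserting the
k-th entry rescans about k/200 leaf blocks, so reattaching one million
inodes does on the order of 10^6 * 10^6 / (2 * 200) = 2.5 billion block
scans in total, versus about one million with a cursor.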


It would be much more efficient to keep a cursor pointing to the last leaf
block in the directory in which an entry was inserted.  Since e2fsck never
deletes entries from lost+found, and because the entry names always get
longer (due to scanning in increasing inode number order), there is no value
in searching the earlier blocks again.  This would make lost+found insertion
O(1) and significantly improve e2fsck performance in this case.  It could be
very fast, since there is only the inode bitmap to traverse, and the
filenames are just "#ino", so only leaf block allocation and writes are
needed.

For generic libext2fs usage (e.g. Darrick's FUSE interface), where entries
may be deleted from a directory, the cursor could be reset to the block of
any deleted entry, if that block is at a lower offset than the cursor.
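
In terms of the toy sketch above, the reset on delete would just be:

/* Give "len" bytes back to leaf block "blk"; if that block is below the
 * cursor, move the cursor back so the new hole can be found again. */
static void toy_delete(struct toy_dir *dir, int blk, int len)
{
        dir->free_bytes[blk] += len;
        if (blk < dir->cursor)
                dir->cursor = blk;
}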

Darrick, at one time I thought you had a patch to fix this behaviour, but
I couldn't find it.  Maybe your patch was related to a similar O(n^2) search
problem with block allocation?


Any thoughts about how to fix this?  I was originally thinking that I could
just cache this in the "file pointer", but no such thing exists in the
ext2fs_link() interface: only the directory inode number is passed, and there
is an additional level of indirection in that it calls the block iterator
with a callback to process each leaf block separately.  There is also the
problem that any such cache would be local to ext2fs_link() and not visible
to ext2fs_unlink().
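
For reference, the public prototypes (from ext2fs.h, if I'm reading it
correctly) show the problem: only the directory inode number is passed, so
there is no per-directory handle where a cursor could live between calls:

extern errcode_t ext2fs_link(ext2_filsys fs, ext2_ino_t dir,
                             const char *name, ext2_ino_t ino, int flags);
extern errcode_t ext2fs_unlink(ext2_filsys fs, ext2_ino_t dir,
                               const char *name, ext2_ino_t ino, int flags);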

I was thinking to start by having ext2fs_link() call ext2fs_dir_iterate2()
directly, just to avoid the first level of confusion in this code.  That
still doesn't allow passing a starting offset for the iteration, however.

Next, cache the block number in the link_struct state and skip the leaf
block searches if the block number is below the cursor, but that still
requires iterating over all of the blocks just to skip ahead to the last one.
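
Roughly what I mean, as an untested sketch (the struct and callback names
are made up, and the cursor would still need to live somewhere more
persistent than the on-stack link_struct):

#include <ext2fs/ext2fs.h>

struct cursor_link_state {
        const char  *name;
        int         namelen;
        ext2_ino_t  inode;
        e2_blkcnt_t cursor;     /* first leaf block worth searching */
        e2_blkcnt_t blockcnt;   /* current leaf block, counted below */
        int         done;
};

static int cursor_link_proc(ext2_ino_t dir, int entry,
                            struct ext2_dir_entry *dirent,
                            int offset, int blocksize,
                            char *buf, void *priv_data)
{
        struct cursor_link_state *ls = priv_data;

        if (offset == 0)                /* first dirent of a new leaf block */
                ls->blockcnt++;
        if (ls->blockcnt - 1 < ls->cursor)
                return 0;               /* blocks below the cursor are full */

        /* ... same free-slot search and insertion as link_proc() here ... */

        return 0;
}

The callback is still invoked for every entry in every block, so this only
makes the early blocks cheap to pass over; it doesn't avoid iterating them,
which is the problem above.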

Should I just call ext2fs_block_iterate2() in e2fsck_reconnect_file()
and keep the mechanics local to e2fsck?  I thought it might be a good
generic optimization, but the interfaces make this difficult.
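
If it stays local to e2fsck it could look something like the following
untested sketch; the cursor variable, the context struct, and the
insert_dirent_in_block() helper are all made-up names, and only
ext2fs_block_iterate2() and its flags are the real library interface:

#include <ext2fs/ext2fs.h>

/* Hypothetical helper (not shown): read the leaf block, insert a "#ino"
 * entry if there is room and write it back; returns 1 on success. */
int insert_dirent_in_block(ext2_filsys fs, blk_t blk, ext2_ino_t ino);

static e2_blkcnt_t lf_cursor;   /* last lost+found leaf block that had room */

struct reconnect_ctx {
        ext2_ino_t ino;         /* disconnected inode being reattached */
        int        linked;
};

static int lf_link_blk(ext2_filsys fs, blk_t *blocknr, e2_blkcnt_t blockcnt,
                       blk_t ref_blk, int ref_offset, void *priv_data)
{
        struct reconnect_ctx *ctx = priv_data;

        if (blockcnt < lf_cursor)
                return 0;               /* already known to be full */

        if (insert_dirent_in_block(fs, *blocknr, ctx->ino)) {
                lf_cursor = blockcnt;   /* start here next time */
                ctx->linked = 1;
                return BLOCK_ABORT;
        }
        return 0;
}

/* called from e2fsck_reconnect_file() on the lost+found inode, roughly:
 *
 *      ext2fs_block_iterate2(fs, lpf_ino, BLOCK_FLAG_DATA_ONLY, NULL,
 *                            lf_link_blk, &ctx);
 */

If no block had room it would still fall back to expanding lost+found and
retrying, the same as the current code does.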

Cheers, Andreas