On Mar 13, 2007 00:04 -0400, Brian Davidson wrote:
> Here's strace when running w/ 6GB of memory & with max_map_count set
> to 20000000.  It looks like that got rid of the ENOMEM's from mmap,
> but it's still hanging in the same place...
>
> The backtrace seems to be essentially the same:
>
> (gdb) bt
> #0  0x0000000000418aa5 in get_icount_el (icount=0x5cf170,
>     ino=732562070, create=1) at icount.c:251
> #1  0x0000000000418dd7 in ext2fs_icount_increment (icount=0x5cf170,
>     ino=732562070, ret=0x7fffffad6e06) at icount.c:339
> #2  0x000000000040a3cf in check_dir_block (fs=0x5af560,
>     db=0x2b1011a88064, priv_data=0x7fffffad7000) at pass2.c:1021
> #3  0x0000000000416c69 in ext2fs_dblist_iterate (dblist=0x5c3f20,
>     func=0x409980 <check_dir_block>, priv_data=0x7fffffad7000)
>     at dblist.c:234
> #4  0x0000000000408d9d in e2fsck_pass2 (ctx=0x5ae700) at pass2.c:149
> #5  0x0000000000403102 in e2fsck_run (ctx=0x5ae700) at e2fsck.c:193
> #6  0x0000000000401e50 in main (argc=Variable "argc" is not available.

The icount implementation assumes that the number of hard-linked files
is very low in comparison to the number of singly-linked files.  It uses
a linear list to look up the hard-linked inodes.  I suspect it needs
some algorithm lovin' to turn that into a hash table (possibly
multi-level) if the number of links in a given bucket becomes too large.
We could treat the common case as a single hash bucket if that makes the
code simpler and more efficient.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
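
[Editor's note: the hash-table idea above could be sketched roughly as
below.  This is a hypothetical illustration, not the actual e2fsprogs
icount code; the structure names, the fixed bucket count, and the
function signatures are all invented for the example.  The point is
that chaining hard-linked inodes into hash buckets makes each
get_icount_el() lookup O(1) on average instead of a linear scan.]

```c
/* Hypothetical sketch: hash-bucketed icount lookup.  Not the real
 * e2fsprogs implementation; names and sizes are invented. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t ext2_ino_t;

#define ICOUNT_BUCKETS 1024  /* could start at 1 bucket for the common case */

struct icount_el {
	ext2_ino_t ino;
	uint16_t count;
	struct icount_el *next;     /* chain within one bucket */
};

struct icount_hash {
	struct icount_el *buckets[ICOUNT_BUCKETS];
};

/* Average O(1) lookup instead of scanning every hard-linked inode. */
static struct icount_el *get_icount_el(struct icount_hash *h,
				       ext2_ino_t ino, int create)
{
	unsigned b = ino % ICOUNT_BUCKETS;
	struct icount_el *el;

	for (el = h->buckets[b]; el; el = el->next)
		if (el->ino == ino)
			return el;
	if (!create)
		return NULL;
	el = calloc(1, sizeof(*el));
	if (!el)
		return NULL;          /* out of memory */
	el->ino = ino;
	el->next = h->buckets[b];     /* push onto bucket chain */
	h->buckets[b] = el;
	return el;
}

static int icount_increment(struct icount_hash *h, ext2_ino_t ino,
			    uint16_t *ret)
{
	struct icount_el *el = get_icount_el(h, ino, 1);

	if (!el)
		return -1;
	el->count++;
	if (ret)
		*ret = el->count;
	return 0;
}
```

[A multi-level variant, as suggested, would resize or split a bucket
once its chain grows past some threshold, keeping the degenerate
all-hard-links case bounded as well.]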