On Fri, Aug 15, 2008 at 11:22:43PM +0530, Aneesh Kumar K.V wrote:
> On Thu, Aug 14, 2008 at 07:18:17PM -0400, Theodore Tso wrote:
> > On Thu, Aug 14, 2008 at 11:14:40PM +0530, Aneesh Kumar K.V wrote:
> > > mballoc small file block allocation uses per-cpu prealloc
> > > space. Use the goal block when searching for the right prealloc
> > > space. Also make sure ext4_da_writepages tries to write
> > > all the pages for small files in a single attempt.
> >
> > Hi Aneesh, how are you testing your patch?  I've created the following
> > shell script:
> >
> > -------------------------------
> > #!/bin/sh
> > #
> > # small-files-frag-test --- test for small files fragmentation
> >
> > DEVICE=/dev/thunk/testbench
> >
> > mke2fs -t ext4dev $DEVICE
> > mount -t ext4dev $DEVICE /mnt
> > tar -C /usr -cf - bin lib | tar -C /mnt -xpf -
> > sync; sleep 5
> > umount $DEVICE
> > e2fsck -nfv -E fragcheck $DEVICE
> > -------------------------------
> >
> > ... and the results show roughly the same amount of fragmentation, and
> > the same pattern.  In fact, it's a little worse (30% vs 25%).
> >
> >    37912 inodes used (11.57%)
> >    11468 non-contiguous inodes (30.2%)
> >          # of inodes with ind/dind/tind blocks: 0/0/0
> >          Extent depth histogram: 32638/5
> >   711894 blocks used (54.31%)
>
> I have better results with the below patch on top of the patch I sent.
>    21156 inodes used (0.47%)
>      158 non-contiguous inodes (0.7%)
>          # of inodes with ind/dind/tind blocks: 4/4/4
>   581216 blocks used (3.24%)
>        0 bad blocks
>        1 large file

And the fragmented inodes are all directories, for which we don't use
prealloc space:

debugfs:  ncheck 12
Inode	Pathname
12	/bin
debugfs:  ncheck 1987
Inode	Pathname
1987	/lib
debugfs:  ncheck 7657
Inode	Pathname
7657	/lib/python2.5
debugfs:  ncheck 11602
Inode	Pathname
11602	/lib/X11/xserver
debugfs:  ncheck 14279
Inode	Pathname
14279	/lib/locale
debugfs:  ncheck 20615
Inode	Pathname
20615	/lib/ooo-2.0/program
debugfs:

-aneesh
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html