Re: delalloc is crippling fs_mark performance

Eric Sandeen wrote:
> running fs_mark like this:
> 
> fs_mark -d /mnt/test -D 256 -n 100000 -t 4 -s 20480 -F -S 0
> 
> (256 subdirs, 100000 files/iteration, 4 threads, 20KB file size, no sync)
> 
> on a 1T fs, with and without delalloc (mount option), is pretty interesting:
> 
> http://people.redhat.com/esandeen/ext4/fs_mark.png
> 
> somehow delalloc is crushing performance here.  I'm planning to wait
> 'til the fs is full and see what the effect is on fsck, and look at the
> directory layout for differences compared to w/o delalloc.
> 
> But something seems to have gone awry here ...
> 
> This is on 2.6.26 with the patch queue applied up to stable.
> 
> -Eric
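
For reference, the two runs above differ only in the mount option; roughly
like this (the device name below is just a placeholder, not copied from the
test box):

  mount -t ext4dev -o delalloc   /dev/sdb1 /mnt/test    # delalloc run
  mount -t ext4dev -o nodelalloc /dev/sdb1 /mnt/test    # nodelalloc run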

I oprofiled both with and without delalloc for the first 15% of the fs fill:

==> delalloc.op <==
CPU: AMD64 processors, speed 2000 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 100000
samples  %        image name               app name                 symbol name
56094537 73.6320  ext4dev.ko               ext4dev                  ext4_mb_use_preallocated
642479    0.8433  vmlinux                  vmlinux                  __copy_user_nocache
523803    0.6876  vmlinux                  vmlinux                  memcmp
482874    0.6338  jbd2.ko                  jbd2                     do_get_write_access
480687    0.6310  vmlinux                  vmlinux                  kmem_cache_free
403604    0.5298  ext4dev.ko               ext4dev                  str2hashbuf
400471    0.5257  vmlinux                  vmlinux                  __find_get_block

==> nodelalloc.op <==
CPU: AMD64 processors, speed 2000 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 100000
samples  %        image name               app name                 symbol name
56167198 56.8949  ext4dev.ko               ext4dev                  ext4_mb_use_preallocated
1524662   1.5444  jbd2.ko                  jbd2                     do_get_write_access
1234776   1.2508  vmlinux                  vmlinux                  __copy_user_nocache
1115267   1.1297  jbd2.ko                  jbd2                     jbd2_journal_add_journal_head
1053102   1.0667  vmlinux                  vmlinux                  __find_get_block
963646    0.9761  vmlinux                  vmlinux                  kmem_cache_free
958804    0.9712  vmlinux                  vmlinux                  memcmp
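
The tables above are plain oprofile output; the collection was roughly the
following sequence (the vmlinux path is just an example):

  opcontrol --init
  opcontrol --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux
  opcontrol --start
  # ... run fs_mark until ~15% of the fs is filled ...
  opcontrol --stop
  opreport -l > delalloc.op    # nodelalloc.op for the second run
  opcontrol --reset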

not sure if this points to anything or not - but
ext4_mb_use_preallocated is working awfully hard in both cases :)
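
If it's worth drilling down, a callgraph / annotated profile should show
where the cycles inside ext4_mb_use_preallocated actually go; an untested
sketch of the invocation:

  opcontrol --callgraph=16
  opcontrol --start
  # ... rerun a slice of the fs_mark load ...
  opcontrol --stop
  opreport -cl | less
  opannotate --source --include-symbols=ext4_mb_use_preallocated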

-Eric
