On Fri, Mar 2, 2012 at 03:47, Ted Ts'o <tytso@xxxxxxx> wrote:
> Two things I'd try:
>
> #1) If this is a freshly created file system, the kernel may be
> initializing the inode table in the background, and this could be
> interfering with your benchmark workload.  To address this, you can
> either (a) add the mount option noinit_itable, (b) add the mke2fs
> option "-E lazy_itable_init=0" --- but this will cause the mke2fs to
> take a lot longer, or (c) mount the file system and wait until
> "dumpe2fs /dev/md3 | tail" shows that the last block group has the
> ITABLE_ZEROED flag set.  For benchmarking purposes on a scratch
> workload, option (a) above is the fastest thing to do.
>

Thank you, Ted. I followed this and got the same result (read IOPS ~950 / write IOPS ~100).

> #2) It could be that the file system is choosing blocks farther away
> from the beginning of the disk, which is slower, whereas the fio on
> the raw disk will use the blocks closest to the beginning of the disk,
> which are the fastest ones.  You could try creating the file system so
> it is only 10GB, and then try running fio on that small, truncated
> file system, and see if that makes a difference.

I created LVM on top of the RAID10 device and then created a smaller LV (20GB).
After that I ran benchmarks against that very same LV with different
filesystems, and the results are interesting:

  xfs      (read IOPS ~1700 / write IOPS ~200)
  ext4     (read IOPS ~950  / write IOPS ~100)
  ext3     (read IOPS ~900  / write IOPS ~100)
  reiserfs (read IOPS ~930  / write IOPS ~100)
  btrfs    (read IOPS ~1200 / write IOPS ~120)

I got very bad performance from XFS about two months ago
(http://www.spinics.net/lists/xfs/msg08688.html), which was caused by known
XFS bugs, so I tried ext4 on some of my servers. It worked very well until I
set up this new server with soft RAID10.

What should I learn to understand what's happening? Any suggestion is
appreciated.

--
Xupeng Yun
http://about.me/xupeng
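
For anyone reproducing this from the archive, the three options in #1 boil
down to roughly the following commands. The mount point /mnt/test is only a
placeholder; the device name follows the /dev/md3 used above:

  # (a) mount without background inode-table zeroing (fastest for a scratch benchmark)
  mount -t ext4 -o noinit_itable /dev/md3 /mnt/test

  # (b) zero the inode tables at mkfs time instead (makes mke2fs itself much slower)
  mke2fs -t ext4 -E lazy_itable_init=0 /dev/md3

  # (c) or wait until the last block group shows the ITABLE_ZEROED flag
  dumpe2fs /dev/md3 | tail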
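
The LVM setup and the fio job were not spelled out in this message, so the
following is only a sketch of how a 20GB test LV on the RAID10 array and a
mixed random-I/O run might look. The volume names (vg_bench, lv_bench), the
mount point, and all fio parameters here are assumptions, not the exact ones
used in the tests above:

  # carve a small 20GB LV out of the RAID10 array (names are placeholders)
  pvcreate /dev/md3
  vgcreate vg_bench /dev/md3
  lvcreate -L 20G -n lv_bench vg_bench
  mkfs.ext4 /dev/vg_bench/lv_bench
  mount /dev/vg_bench/lv_bench /mnt/test

  # representative random read/write fio run against the mounted LV
  fio --name=randrw --directory=/mnt/test --ioengine=libaio --direct=1 \
      --rw=randrw --rwmixread=70 --bs=4k --size=4g --iodepth=32 \
      --runtime=60 --time_based --group_reporting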