Initial results of FLEX_BG feature.

Hi folks,

I've started playing with the FLEX_BG feature (for now packing of
block group metadata closer together) and started doing some
preliminary benchmarking to see if the feature is worth pursuing.
I chose an FFSB profile that does single-threaded small creates and
writes, each followed by an fsync.  This is a workload I ran for a
customer a while ago, on which ext3 performed poorly.
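For anyone who wants to approximate the workload without FFSB, here is a minimal sketch of the same pattern (single-threaded small-file creates, each write followed by an fsync).  The file count and file size below are made-up parameters, not the actual FFSB profile settings:

```python
import os
import tempfile
import time

def small_create_fsync_bench(num_files=100, file_size=4096):
    """Create num_files small files, write file_size bytes to each,
    and fsync after every write; return transactions/sec."""
    payload = b"x" * file_size
    with tempfile.TemporaryDirectory() as workdir:
        start = time.monotonic()
        for i in range(num_files):
            path = os.path.join(workdir, "file%06d" % i)
            fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
            try:
                os.write(fd, payload)
                os.fsync(fd)  # force data (and, on ext3/4, journal) to disk
            finally:
                os.close(fd)
        elapsed = time.monotonic() - start
    return num_files / elapsed

if __name__ == "__main__":
    print("%.2f transactions/sec" % small_create_fsync_bench())
```

Run it on the filesystem under test (point tempfile.tempdir at the mount point) to get a rough transactions/sec number comparable in spirit, though not in absolute value, to the FFSB results below.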

Here are some of the results (in transactions/sec@%CPU util) on a single
143GB@10K rpm disk.

ext4				1680.54@xxx%
ext4(flex_bg)			2105.56@xxx% 25% improvement
ext4(data=writeback)		1374.50@xxx% <- hum...
ext4(flex_bg data=writeback)	2323.12@xxx% 38% over best ext4
ext3				1025.84@xxx%
ext3(data=writeback)		1136.85@xxx%
ext2				1152.59@xxx%
xfs				1968.84@xxx%
jfs				1424.05@xxx%

The results are from packing the metadata of 64 block groups closer
together at mkfs time.  I still need to clean up the e2fsprogs patches,
but I hope to submit them to the list later this week for others to
try.  It seems that fsck doesn't quite like the new location of the
metadata, and I'm not sure how big an effort it will be to fix.  I
mention this since one of the assumptions behind implementing FLEX_BG
was reduced fsck time, and it could be a while before I'm able to test
that.
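For reference, in later mainline e2fsprogs releases (after the patches discussed here were merged and cleaned up) this packing became selectable at mkfs time via the -G flag; the invocation below reflects that later interface, not the work-in-progress patches in this mail:

```shell
# Create a small test image and format it with flex_bg, grouping the
# metadata of 64 block groups together (-G sets the flex group size).
truncate -s 256M /tmp/flexbg-test.img
mke2fs -F -q -t ext4 -b 4096 -O flex_bg -G 64 /tmp/flexbg-test.img

# Confirm the feature and group size landed in the superblock.
dumpe2fs -h /tmp/flexbg-test.img | grep -Ei 'features|flex'
```

dumpe2fs should list flex_bg among the filesystem features and report the flex block group size of 64.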

-JRS
-
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
