3.2 and 3.1 filesystem scalability measurements

I've posted the results of some 3.2 and 3.1 ext4 scalability measurements and comparisons on a 48-core x86-64 server at:

http://free.linux.hp.com/~enw/ext4/3.2

This includes throughput and CPU efficiency graphs for five simple workloads, the corresponding raw data, and lockstats taken on ext4 filesystems with and without journals. These data have been useful in the past for improving ext4 scalability as a function of core and thread count.
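For anyone who wants a rough feel for what a large-file-create style workload stresses, the standalone C sketch below forks several writer processes that each stream a large file to disk and then reports aggregate throughput. It is only an illustrative analogue: the actual workload definitions, file sizes, and thread counts behind the published numbers are the ones documented at the URL above, and the NPROCS/FILE_SIZE/CHUNK values here are arbitrary placeholders.

/*
 * Illustrative sketch only: a minimal multi-process "large file create"
 * throughput microbenchmark, loosely analogous to the large_file_creates
 * workload referenced above.  The real workload definitions and raw data
 * are at the URL in this message; the worker count, file size, and write
 * size below are arbitrary placeholders, not the published configuration.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROCS     8                       /* placeholder worker count     */
#define FILE_SIZE  (1024LL * 1024 * 1024)  /* 1 GiB per worker (arbitrary) */
#define CHUNK      (1024 * 1024)           /* 1 MiB write size (arbitrary) */

static void worker(int id)
{
	char path[64], *buf;
	long long written = 0;
	int fd;

	snprintf(path, sizeof(path), "bigfile.%d", id);
	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	buf = calloc(1, CHUNK);
	while (written < FILE_SIZE) {
		if (write(fd, buf, CHUNK) != CHUNK) {
			perror("write");
			exit(1);
		}
		written += CHUNK;
	}
	fsync(fd);		/* make sure the data actually reached the disk */
	close(fd);
	free(buf);
	exit(0);
}

int main(void)
{
	struct timeval start, end;
	double secs, mb;
	int i;

	gettimeofday(&start, NULL);
	for (i = 0; i < NPROCS; i++)
		if (fork() == 0)
			worker(i);
	for (i = 0; i < NPROCS; i++)
		wait(NULL);
	gettimeofday(&end, NULL);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_usec - start.tv_usec) / 1e6;
	mb = (double)NPROCS * FILE_SIZE / (1024.0 * 1024.0);
	printf("%d writers, %.0f MiB total, %.2f s, %.1f MiB/s aggregate\n",
	       NPROCS, mb, secs, mb / secs);
	return 0;
}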

For reference, ext3, xfs, and btrfs data are also included.

The most notable improvement in 3.2 is a big scalability gain for journaled ext4 when running the large_file_creates workload. This bisects cleanly to Wu Fengguang's IO-less balance_dirty_pages() patch, which was included in the 3.2 merge window.

(Please note that the test system's hardware and firmware configuration has changed since my last posting, so this data set cannot be directly compared with my older sets.)

Thanks,
Eric


