Hello,

We used ext3 for years on our Cyrus servers (and even tuned it a lot [0]), then changed to XFS *aggressively tuned for small files* [0] and have been happy with it. But we are running servers with plenty of RAM and cores, a tuned SAN FC storage, and now even a tuned custom WAFL SAN FC storage, with LVM. In this scenario, XFS's parallel design shines.

After writing the article [0], we adopted an Allocation Group size of 256 MB (approximately our average user quota) for new servers, and it improved final performance under parallel loads, with good write performance for *small files*. There is a tricky trade-off between the number of AGs, CPU load, parallel IOPS, CPU I/O contention, file size, and the write vs. read profile. (A rough sketch of the commands involved follows at the end of this message.)

Key details of XFS: it creates new directories in their own AG when possible and uses delayed allocation, spreading load, increasing parallelism, and reducing fragmentation. We are not using preallocation. Also, an occasional XFS filesystem check is faster than an ext3 one.

As we use Debian Stable (mostly) and RH (on some legacy email servers soon to be decommissioned this year), other filesystems were not considered nor tested for production deployments.

Currently, we are stress testing a Cyrus Murder on Debian, on Xen VMs, using VHD vdisks containing XFS, for comparison with raw LVM vdisks.

Good luck.
Andre Felipe Machado

[0] http://www.techforce.com.br/news/linux_blog/lvm_raid_xfs_ext3_tuning_for_small_files_parallel_i_o_on_debian
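
In case it helps, here is a minimal sketch of the kind of commands involved. The device path, mount point, and mount options are illustrative assumptions, not our exact production settings:

    # Create the filesystem with fixed-size 256 MB allocation groups.
    # /dev/vg0/cyrus-spool is a placeholder LVM volume name.
    mkfs.xfs -d agsize=256m /dev/vg0/cyrus-spool

    # Mount without atime updates and with larger in-memory log buffers,
    # which helps small-file write workloads.
    mount -o noatime,logbufs=8,logbsize=256k /dev/vg0/cyrus-spool /var/spool/cyrus

    # Confirm the resulting AG count and size.
    xfs_info /var/spool/cyrus

    # Read-only consistency check (the "verify" mentioned above);
    # run it while the filesystem is unmounted.
    xfs_repair -n /dev/vg0/cyrus-spool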