Re: xfs rm performance

Christoph Hellwig, on 08/02/2010 11:18 PM wrote:
> On Mon, Aug 02, 2010 at 11:03:00PM +0400, Vladislav Bolkhovitin wrote:
>> I traced what XFS is doing at that time. The initiator is sending, a
>> _single command at a time_, the following pattern:
>
> That's exactly the queue draining we're talking about here.  To see
> how the pattern gets better use the nobarrier option.

Yes, with this option it's almost 2 times better, and I see a shallow queue depth (1-3 entries on average, max 8), but the performance is still bad:

# time rm _*

real	3m31.385s
user	0m0.004s
sys	0m26.674s
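
(For reference, a minimal sketch of the nobarrier mount -- /dev/sdX and
/mnt/test below are placeholders, not the actual device and mount point:

	mount -t xfs -o nobarrier /dev/sdX /mnt/test

The queue depth can be watched while the rm runs with something like

	iostat -x 1

looking at the avgqu-sz column for the device in question.)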

> Even with that, XFS traditionally has a bad I/O pattern for
> metadata-intensive workloads due to the amount of log I/O they
> require.  Starting from Linux 2.6.35 the delayed logging code fixes
> this, and we hope to enable it by default after about 10 to 12 months
> of extensive testing.
>
> Try to re-run your test with
>
> 	-o delaylog,logbsize=262144
>
> to see a better log I/O pattern.  If your target doesn't present a
> volatile write cache, also add the nobarrier option mentioned above.

Unfortunately, at the moment I can't run 2.6.35 on that machine, but I will try it as soon as I can.
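
(A sketch of the suggested mount for when I can -- again, /dev/sdX and
/mnt/test are placeholders:

	mount -t xfs -o delaylog,logbsize=262144,nobarrier /dev/sdX /mnt/test

keeping nobarrier only if the target really presents no volatile write
cache, per the note above.)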

Thanks,
Vlad