Re: background on the ext3 batching performance issue

On Thursday 28 February 2008, Ric Wheeler wrote:

[ fsync batching can be slow ]

> One more thought - what we really want here is to have a sense of the
> latency of the device. In the S-ATA disk case, this optimization works
> well for batching since we "spend" an extra 4ms worst case in the chance
> of combining multiple, slow 18ms operations.
>
> With the clariion box we tested, the optimization fails badly since the
> cost is only 1.3 ms so we optimize by waiting 3-4 times longer than it
> would take to do the operation immediately.
>
> This has also seemed to me to be the same problem that IO
> schedulers face with plugging - we want to dynamically figure out when
> to plug and unplug without hard coding device specific tunings.
>
> If we bypass the snippet for multi-threaded writers, we would probably
> slow down this workload on normal S-ATA/ATA drives (or even higher
> performance non-RAID disks).

It probably makes sense to keep track of the average number of writers we are 
able to gather into a transaction.  There are lots of similar workloads where 
we have a pool of procs doing fsyncs, and the size of the transaction or the 
number of times we joined a running transaction will be fairly constant.

-chris
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
