Re: [PATCH 2/2] improve ext3 fsync batching

Andrew Morton wrote:
> On Tue, 19 Aug 2008 07:01:11 -0400 Ric Wheeler <rwheeler@xxxxxxxxxx> wrote:
>
>> It would be great to be able to use this batching technique for faster
>> devices, but we currently sleep 3-4 times longer waiting to batch for an
>> array than it takes to complete the transaction.
>
> Obviously, tuning that delay down to the minimum necessary is a good
> thing.  But doing it based on commit-time seems indirect at best.  What
> happens on a slower disk when commit times are in the tens of
> milliseconds?  When someone runs a concurrent `dd if=/dev/zero of=foo'
> when commit times go up to seconds?

Transactions on that busier drive would take longer, so we would sleep
longer, which would let us batch more into one transaction. That should be
a good result, and the sleep should reset to a shorter time when the drive
gets less busy (and transactions get shorter).
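
To make that concrete, here is a rough user-space model of the idea (this
is not the jbd code; every name below is invented for illustration):
measure how long commits actually take, keep a running average, and sleep
roughly that long before the next one, so the wait tracks the device
instead of a fixed constant.

/*
 * User-space sketch only -- a model of "sleep about one commit time",
 * not the journalling code itself.  All names are made up.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static long avg_commit_us;              /* running average commit time */

static long now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000L + ts.tv_nsec / 1000;
}

static void do_commit(void)
{
	long start = now_us();

	/* stand-in for the real journal commit */
	usleep(2000);

	/* fold the observed commit time into a simple running average */
	if (avg_commit_us == 0)
		avg_commit_us = now_us() - start;
	else
		avg_commit_us = (avg_commit_us + (now_us() - start)) / 2;
}

static void fsync_like_path(void)
{
	/*
	 * Sleep for about one average commit time so other writers can
	 * join this transaction: the wait stays cheap on a slow disk
	 * (commit dominates anyway) and shrinks on a fast array, so we
	 * stop over-sleeping there.
	 */
	if (avg_commit_us)
		usleep(avg_commit_us);
	do_commit();
}

int main(void)
{
	for (int i = 0; i < 5; i++) {
		fsync_like_path();
		printf("avg commit time: %ld us\n", avg_commit_us);
	}
	return 0;
}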

> Perhaps a better scheme would be to tune it based on how many other
> processes are joining that transaction.  If it's "zero" then decrease
> the timeout.  But one would need to work out how to increase it, which
> perhaps could be done by detecting the case where process A runs an
> fsync when a commit is currently in progress, and that commit was
> caused by process B's fsync.

This is really, really a property of the device's latency at any given
point in time. If there are no other processes running, we could optimize
and not wait at all.
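
A minimal sketch of that "don't wait if we are alone" idea, again as a
stand-alone user-space model with invented names rather than anything from
jbd: remember which task issued the last sync and only pay the batching
sleep when a different task has been syncing too.

/*
 * User-space sketch only.  A lone fsync()er falls straight through;
 * the sleep is taken only when there is evidence of a second writer.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

static pid_t last_sync_writer;          /* 0 means "nobody yet" */

static void maybe_batch_sleep(pid_t caller, long sleep_us)
{
	if (last_sync_writer != 0 && last_sync_writer != caller) {
		/* someone else is syncing too -- worth waiting for them */
		printf("task %d: waiting %ld us for others to join\n",
		       (int)caller, sleep_us);
		usleep(sleep_us);
	} else {
		/* no evidence of anyone to batch with -- commit now */
		printf("task %d: alone, committing immediately\n",
		       (int)caller);
	}
	last_sync_writer = caller;
}

int main(void)
{
	maybe_batch_sleep(100, 2000);   /* first sync: no one to wait for */
	maybe_batch_sleep(100, 2000);   /* same task again: still no wait */
	maybe_batch_sleep(200, 2000);   /* a second task appears: now wait */
	return 0;
}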

> But before doing all that I would recommend/ask that the following be
> investigated:
>
>   - How effective is the present code?

It causes the most expensive storage (arrays) to run 3-4 times slower than
it should on a synchronous write workload (an NFS server or mail server,
say) with more than one thread. For example, against a small EMC array I
saw 720 files/sec on ext3 with one thread and (if I remember correctly)
225 files/sec with two ;-)

>   - What happens when it is simply removed?

If you remove the code, you will not see throughput rise when you go
multithreaded on existing slow devices (S-ATA/ATA, for example). Faster
devices will no longer see that two-thread drop.

>   - Add instrumentation (a counter and a printk) to work out how
>     many other tasks are joining this task's transaction.
>
>     - If the answer is "zero" or "small", work out why.
>
>   - See if we can increase its effectiveness.
>
> Because it could be that the code broke.  There might be issues with
> higher-level locks which are preventing the batching.  For example, if
> all the files which the test app is syncing are in the same directory,
> perhaps all the tasks are piling up on that directory's i_mutex?
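
For the instrumentation suggested above, something along these lines would
do; this is a toy user-space model with printf standing in for printk, and
none of the names are taken from the real jbd code.

/*
 * User-space sketch of the instrumentation: count how many handles join
 * each transaction and report the number at commit time.
 */
#include <stdio.h>

struct toy_transaction {
	int handle_count;       /* how many handles have joined */
	int tid;
};

static void join_transaction(struct toy_transaction *t)
{
	t->handle_count++;
}

static void commit_transaction(struct toy_transaction *t)
{
	/* the kernel version would be a printk(KERN_DEBUG ...) here */
	printf("tid %d committed with %d handle(s) batched\n",
	       t->tid, t->handle_count);
	t->handle_count = 0;
	t->tid++;
}

int main(void)
{
	struct toy_transaction t = { .handle_count = 0, .tid = 1 };

	/* three fsync()ing tasks join before the commit fires */
	join_transaction(&t);
	join_transaction(&t);
	join_transaction(&t);
	commit_transaction(&t);

	/* a lone fsync: "1 handle" means the batching bought nothing */
	join_transaction(&t);
	commit_transaction(&t);
	return 0;
}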

I have to admit that I don't see the down side here - we have shown a huge
increase for arrays (an embarrassingly huge increase for RAM disks) and see
no degradation for the S-ATA/ATA case.

The code is not broken (I have been there and done the performance tuning
on the original code); it just did not account for the widely varying
average response times of different classes of storage ;-)

ric


