Re: [PATCH 4/5] bcache: writeback: collapse contiguous IO better

On Sun, Oct 1, 2017 at 10:23 AM, Coly Li <i@xxxxxxx> wrote:
> Hi Mike,
>
> Your data set is too small. Normally, the bcache users I talk with use
> bcache for distributed storage clusters or commercial databases; their
> cache devices are large and fast. It is possible we see different I/O
> behaviors because we use different configurations.

A small dataset is sufficient to tell whether the I/O subsystem is
successfully aggregating sequential writes or not.  :P  It doesn't
matter whether the test runs for 10 minutes or 10 hours...  The
writeback code walks the dirty data in order.  :P

***We are measuring whether the cache and I/O scheduler can correctly
order up to 64 outstanding writebacks from a chunk of 500 dirty
extents; we do not need to do 12 hours of writes first to measure
this.***
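To make the point concrete, here is a toy sketch (not bcache's actual code; the function name and the (offset, length) representation are my own) of what "collapsing contiguous IO" means: adjacent dirty extents merge into one larger writeback request, so 500 contiguous 4 KiB extents become a single sequential write.

```python
# Illustrative sketch only -- not bcache code. Collapse a sorted run of
# dirty (offset, length) extents into maximal contiguous requests.

def collapse_extents(extents):
    """Merge adjacent extents; assumes input is sorted by offset."""
    merged = []
    for off, length in extents:
        if merged and merged[-1][0] + merged[-1][1] == off:
            # This extent starts exactly where the previous one ends:
            # extend the previous request instead of issuing a new one.
            last_off, last_len = merged[-1]
            merged[-1] = (last_off, last_len + length)
        else:
            merged.append((off, length))
    return merged

# 500 fully contiguous 4 KiB extents collapse into one large write:
dirty = [(i * 4096, 4096) for i in range(500)]
print(collapse_extents(dirty))  # [(0, 2048000)]
```

With a gap anywhere in the run, the merge breaks and a new request starts; that is why the test needs genuinely contiguous dirty data.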

It's important that there be actual contiguous data, though, or the
difference will be less pronounced.  If you write too much, the dirty
data will have many more holes in it, both from writeback occurring
during the test and from writes bypassing the cache.

Having all the data to writeback be sequential is an
artificial/synthetic condition that allows the difference to be
measured more easily.  Under these conditions it's about a 2x
difference in my test environment.  With real data that is not purely
sequential, I expect it's more like a few percent.
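A quick hedged illustration of why holes shrink the benefit (a simulation I wrote for this reply, not a measurement): drop a fraction of 4 KiB blocks from a sequential run and count how many contiguous requests remain. Fully sequential data collapses to one request; a holey dirty set breaks into many.

```python
# Toy simulation, not bcache code: how holes in the dirty set break up
# contiguous runs and reduce the win from collapsing writebacks.
import random

def count_requests(offsets, bs=4096):
    """Number of contiguous runs among sorted block offsets."""
    runs = 0
    prev = None
    for off in sorted(offsets):
        if prev is None or off != prev + bs:
            runs += 1  # gap before this block: a new request starts
        prev = off
    return runs

random.seed(0)
full = [i * 4096 for i in range(500)]
# Simulate ~30% of blocks being clean (written back already or bypassed):
holey = [off for off in full if random.random() > 0.3]

print(count_requests(full))   # 1 -- the whole run merges
print(count_requests(holey))  # many requests once holes break the runs
```

The exact numbers are synthetic; the point is only that the merge ratio degrades quickly as the dirty data stops being purely sequential.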

Mike


