Re: [PATCH 4/5] bcache: writeback: collapse contiguous IO better

On 2017/10/6 7:57 PM, Michael Lyle wrote:
> OK, here's some data:  http://jar.lyle.org/~mlyle/writeback/
> 
> The complete test script is there to automate running writeback
> scenarios --- NOTE: DON'T RUN IT WITHOUT EDITING THE DEVICES FOR YOUR
> HARDWARE.
> 
> Only one run each way, but they take 8-9 minutes to run, so we can
> easily get more ;)  I compared patches 1-3 (which are uncontroversial)
> to patches 1-5.
> 
> Concerns I've heard:
> 
> - The new patches will contend for I/O bandwidth with front-end writes:
> 
> No:
>  3 PATCHES: write: io=29703MB, bw=83191KB/s, iops=10398, runt=365618msec
> vs
>  5 PATCHES: write: io=29746MB, bw=86177KB/s, iops=10771, runt=353461msec
> 
> It may actually be slightly better-- 3% or so.
> 
> - The new patches will not improve writeback rate.
> 
> No:
> 
> 3 PATCHES: the active period of the test was 366+100=466 seconds, and
> at the end there was 33.4G dirty.
> 5 PATCHES: the active period of the test was 353+100=453 seconds, and
> at the end there was 32.7G dirty.
> 
> This is a moderate improvement.
> 
> - The IO scheduler can combine the writes anyway, so this type of
> patch will not increase write queue merges.
> 
> No:
> 
> Average wrqm/s is 1525.4 in the 3 PATCHES dataset; average wrqm/s is
> 1643.7 in the 5 PATCHES dataset.
> 
> During the last 100 seconds, when ONLY WRITEBACK is occurring, wrqm is
> 1398.0 in 3 PATCHES, and 1811.6 with 5 PATCHES.
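
A quick aside on why the merge count can go up at all: as I understand
the series, writeback now issues dirty keys whose extents sit
back-to-back on the backing device one right after another, so their
writes reach the request queue adjacent to each other. A rough sketch of
such a contiguity check (my own illustration, not code from the patches;
KEY_INODE()/KEY_OFFSET()/KEY_START() are the stock bcache bkey accessors):

static bool dirty_keys_contiguous(struct bkey *prev, struct bkey *next)
{
	/*
	 * bcache keys record their *end* offset, so KEY_START() is
	 * KEY_OFFSET() - KEY_SIZE().  Two dirty keys are adjacent on the
	 * backing device when they belong to the same backing inode and
	 * the second starts exactly where the first ends.
	 */
	return KEY_INODE(prev) == KEY_INODE(next) &&
	       KEY_OFFSET(prev) == KEY_START(next);
}

Writes issued for keys that pass such a test land next to each other in
the request queue, which gives the elevator something to merge and shows
up as the higher wrqm/s above.
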
> 
> - Front-end latency will suffer:
> 
> No:
> 
> The datasets look the same to my eye.  By far the worst thing is the
> occasional 1000ms+ periods in both scenarios when bcache goes to sleep
> while contending for the writeback lock (not affected by these patches,
> but an item for future work if I ever get to move on to a new topic).
> 
> Conclusion:
> 
> These patches provide a small but significant improvement in writeback
> rates, which can be seen with careful testing that produces actual
> sequential writeback.  They lay the groundwork for further
> improvements, such as plugging the block layer and allowing
> accelerated writeback when the device is idle.
> 
[snip]

Hi Mike,

Thank you for the detailed information!

In your test, dirty data occupies 1/8 of the cache device's space. Could
you tell me the exact sizes of the cache device and the cached (backing)
device? Then I will set up a similar configuration on my machine and try
to reproduce the test.
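
Also, regarding the block-layer plugging you mention in your conclusion,
I assume the batched submission would look roughly like the sketch
below. This is only an illustration under my own assumptions (the
writeback_submit_batch() name and the bios[] array sorted by
backing-device offset are hypothetical); blk_start_plug(),
blk_finish_plug() and submit_bio() are the standard block-layer calls.

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Hypothetical helper: submit a batch of already-built writeback bios. */
static void writeback_submit_batch(struct bio **bios, unsigned int nr)
{
	struct blk_plug plug;
	unsigned int i;

	/*
	 * Hold a plug while submitting the whole batch: the resulting
	 * requests are held in the per-task plug list and dispatched
	 * together when the plug is released, so contiguous ones can be
	 * merged into fewer, larger writes.
	 */
	blk_start_plug(&plug);
	for (i = 0; i < nr; i++)
		submit_bio(bios[i]);	/* bios[] sorted by backing-device offset */
	blk_finish_plug(&plug);
}

Since the plug is per-task, it should only delay this task's own
submissions until the batch is complete, without holding up front-end
requests submitted from other contexts.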

-- 
Coly Li


