Re: [PATCH 4/5] bcache: writeback: collapse contiguous IO better

On Fri, Oct 6, 2017 at 11:09 AM, Coly Li <i@xxxxxxx> wrote:
> If I use a 1.8T hard disk as the cached device and a 1TB SSD as the cache
> device, and set fio to write 500G of dirty data in total, is this
> configuration close to the working set and cache size you suggested?

I think it's quicker and easier (and still meaningful) to write 125-200G of
small blocks, i.e. 1/8 to 1/5 of the cache size.
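
For concreteness, the kind of fio run I have in mind is something like the
following (a sketch only -- untested as written, and it assumes the cached
device shows up as /dev/bcache0; adjust bs and size to taste):

  # sketch: assumes the bcache device is /dev/bcache0
  fio --name=dirty-fill --filename=/dev/bcache0 --direct=1 \
      --ioengine=libaio --iodepth=32 --rw=randwrite --bs=4k --size=150g

Small random writes like this leave lots of scattered dirty extents, which
is exactly what the writeback code then has to collapse into contiguous IO.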

Once you exceed 70% cache use (CUTOFF_WRITEBACK_SYNC) you will get a *lot
of holes*, because writes start skipping the cache entirely (the check is
sketched below).  And please note-- something I don't quite understand--
there is some kind of "write amplification" happening somewhere.  When I
write 30G, I end up with 38G of dirty data; I wouldn't expect filesystem
creation and metadata updates to account for that much.  At some point I
need to trace this and understand why it is happening.
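
For reference, the bypass check I mean lives in bcache's writeback.h;
simplified and from memory (so treat the details as approximate -- I've
also dropped a couple of extra conditions), it looks roughly like this:

  /*
   * Simplified sketch of should_writeback() from
   * drivers/md/bcache/writeback.h.  Paraphrased, not verbatim: the
   * device-detaching and partial-stripe checks are omitted.
   */
  #define CUTOFF_WRITEBACK       40
  #define CUTOFF_WRITEBACK_SYNC  70

  static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
                                      unsigned cache_mode, bool would_skip)
  {
          unsigned in_use = dc->disk.c->gc_stats.in_use;  /* % of cache in use */

          /* past 70% in use nothing new is cached, even REQ_SYNC writes */
          if (cache_mode != CACHE_MODE_WRITEBACK ||
              in_use > CUTOFF_WRITEBACK_SYNC)
                  return false;

          if (would_skip)
                  return false;

          /* below 70%, sync writes are cached; others only while under 40% */
          return (bio->bi_opf & REQ_SYNC) ||
                 in_use <= CUTOFF_WRITEBACK;
  }

So once the test pushes the cache past that point, the dirty data is going
to have gaps in it no matter what the writeback code does.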

500G is probably still OK (about 1/2 of the cache), but the more you write,
the bigger the random factor becomes:

Basically, the more you write, the less contiguous the dirty data gets,
because writeback may already have scanned through the volume once and
written out extents that are now "holes" in the middle of the data.  It'll
still be mostly contiguous, but the effect is random.  (This is an effect
of the test scenario, not of any of the code changes.)

Mike


