Re: [PATCH] bcache: back to cache all readahead I/Os

On 2020/1/15 8:39 PM, Nix wrote:
> On 15 Jan 2020, Coly Li stated:
> 
>> I received two reports offline and directly: one came from a github
>> email address and was forwarded to me by Jens, the other from a local
>> storage startup in China.
>>
>> The first report complains that a desktop-PC benchmark dropped by
>> about 50%, and the root cause was traced to commit b41c9b0 ("bcache:
>> update bio->bi_opf bypass/writeback REQ_ flag hints").
>>
>> The second report complains that their small-file workload (mixed
>> reads and writes) shows a performance drop of 20% or more; the
>> suspect change is likewise the readahead restriction.
>>
>> The second reporter verified this patch and confirms the performance
>> issue is gone. I don't know who the first reporter is, so I have had
>> no response so far.
> 
> Hah! OK, looks like readahead is frequently-enough useful that caching
> it is better than not caching it :) I guess the problem is that if you
> don't cache it, it never gets cached at all even if it was useful, so
> the next time round you'll end up having to readahead it again :/
> 

Yes, this is the problem. Data bypassed this way never gets a chance to
enter the cache at all.


> One wonders what effect this will have on a bcache-atop-RAID: will we
> end up caching whole stripes most of the time?
> 

In my I/O pressure testing I used a raid0 backing device assembled from 3
SSDs. From my observation, a whole stripe is not cached for small
read/write requests. Stripe-size alignment is handled in the md raid
layer; even if md returns a bio that resides in a stripe-sized memory
chunk, bcache only takes the bi_size part for its I/O.

Thanks.

-- 

Coly Li


