On 2019/2/16 7:20 PM, Andreas wrote:
> Hello Coly,
>

Hi Andreas,

> I agree with you wholeheartedly, which was the reason for my patch and
> email. But you seem to have gotten it the wrong way around.
> You see, ever since
> https://github.com/torvalds/linux/commit/b41c9b0266e8370033a7799f6806bfc70b7fd75f
> was merged into bcache in late 2017 any IO flagged as REQ_RAHEAD or
> REQ_BACKGROUND is simply skipped (bypassed) and no longer considered for
> caching at all, regardless of IO pattern.
>

Yes, you are right: normal readahead or background requests are not
entirely about random I/O patterns.

> If what you say holds true, it sounds like that patch was wrongfully
> merged back then, as it has introduced the behaviour you do not want
> now. If you believe it makes an exception for sequential FS metadata, I
> would very much like you to review that patch again, as that is not the
> case.
>
> My patch on the other hand aims to revert this change by default, so it
> is all about IO patterns again, but make it configurable for users who
> want this new behaviour.
>

[snipped]

Most such requests are issued speculatively by upper layers, and many of
them are never actually used, so we do not want them on the cache device
unless they are for metadata. Metadata blocks occupy much less cache
device space than normal readahead or background data, so it is fine to
keep them.

If you find anything I express wrongly, that is from myself; if you find
anything reasonable, that comes from Eric and bcache's original author
Kent :-)

I agree with Eric that readahead or background requests should not
occupy expensive and limited cache device space. This is why I don't
want to change the behavior at the moment.

This doesn't mean the patch is rejected. If
1) you can explain in which workloads caching readahead or background
   requests is good for performance, and
2) better performance numbers can be shared,
it would be my pleasure to review this patch. Otherwise I'd like to
avoid extra bypass options.

Thanks.

Coly Li
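
For reference, the bypass behaviour discussed above boils down to a
check like the one sketched below in bcache's request bypass path
(check_should_bypass() in drivers/md/bcache/request.c). This is only an
illustrative sketch of the logic as described in this thread, not the
exact upstream code; the helper name is made up, and the precise
metadata flags involved may differ between kernel versions.

#include <linux/blk_types.h>

/*
 * Hypothetical helper sketching the readahead/background bypass check
 * under discussion. Not verbatim kernel code.
 */
static bool bypass_readahead_background(struct bio *bio)
{
	/*
	 * Readahead and background requests are speculative, so they
	 * skip the cache regardless of I/O pattern, unless they carry
	 * filesystem metadata, which is small and worth caching.
	 */
	if ((bio->bi_opf & (REQ_RAHEAD | REQ_BACKGROUND)) &&
	    !(bio->bi_opf & REQ_META))
		return true;	/* bypass the cache device */

	return false;		/* fall through to normal bypass checks */
}

Andreas's patch, as described above, would make this early bypass
configurable so that such requests go through the usual I/O-pattern
checks again by default.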