Thanks for your reply. As far as I can tell bcache is caching BtrFS
metadata for me, with or without my patch. Perhaps BtrFS correctly flags
it as REQ_PRIO.

In my testing bcache was consistently (and repeatedly) bypassing file
contents just because they were read as a result of a readahead
operation, hence the need for my patch. I also found that, before my
patch, sequential_cutoff and the congestion thresholds had close to no
effect on how much would be cached or bypassed.

The problem you linked to seems to be a separate issue, where different
filesystems flag their internal IO inconsistently, leading to similar yet
distinct problems.

I did not know what REQ_BACKGROUND did or how it is handled, so I made a
separate switch for it just to be on the safe side.

I did not intend my patch to be a thorough solution to all of the ongoing
problems with the hardcoded, IO-flag-based bypass, just a quick fix to
restore the previous performance. Nor do I intend to develop such a
solution, as I feel my knowledge of bcache and the Linux kernel in
general is far too limited to produce a patch of sufficient quality.

On 2019-02-16 at 00:03, Nix wrote:
> On 15 Feb 2019, Andreas said:
>
>> I created a patch to make the bypasses for readahead and background IO
>> that were added in late 2017 configurable via SysFS switches. Since
>> receiving that original patch in my distro's kernel I noticed
>> performance degradation, and found a few people online asking about
>> similar symptoms who weren't able to identify the problem.
>
> Hm.
>
>> diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
>> index 15070412a32e..8028638b348e 100644
>> --- a/drivers/md/bcache/request.c
>> +++ b/drivers/md/bcache/request.c
>> @@ -394,9 +394,13 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
>>  	 * Flag for bypass if the IO is for read-ahead or background,
>>  	 * unless the read-ahead request is for metadata (eg, for gfs2).
>>  	 */
>> -	if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) &&
>> -	    !(bio->bi_opf & REQ_PRIO))
>> -		goto skip;
>> +	if (!(bio->bi_opf & REQ_PRIO))
>> +	{
>> +		if (dc->bypass_readahead_io && (bio->bi_opf & REQ_RAHEAD))
>> +			goto skip;
>> +		if (dc->bypass_background_io && (bio->bi_opf & REQ_BACKGROUND))
>> +			goto skip;
>> +	}
>
> The thing you based this on is buggy: so, as a result, your patch is
> buggy too. You want to apply this atop the patch in
> <https://lkml.org/lkml/2019/2/7/77>, I think: without it, metadata I/O
> will often not be cached at all. I also suspect this is the cause of
> your performance degradation.
>
> FYI, REQ_BACKGROUND is only used for writeback that I can see, so I'm
> not sure that bit of the patch does anything at all, given that, as I
> understand it, bypassing relates solely to reads.
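
For completeness, the sysfs side of this is just two more cached_dev
attributes. Roughly, it looks like the sketch below; this is written
against the rw_attribute()/sysfs_printf()/sysfs_strtoul() helpers in
drivers/md/bcache/sysfs.h from memory, the switch names simply mirror the
new fields used in the hunk above, and the exact wiring may not match my
actual diff line for line:

	/* drivers/md/bcache/bcache.h: add to struct cached_dev */
	unsigned int		bypass_readahead_io;
	unsigned int		bypass_background_io;

	/* drivers/md/bcache/sysfs.c: declare the attributes next to the
	 * existing ones (e.g. sequential_cutoff) */
	rw_attribute(bypass_readahead_io);
	rw_attribute(bypass_background_io);

	/* in the cached_dev show() handler */
	sysfs_printf(bypass_readahead_io, "%u", dc->bypass_readahead_io);
	sysfs_printf(bypass_background_io, "%u", dc->bypass_background_io);

	/* in the cached_dev store() handler */
	sysfs_strtoul(bypass_readahead_io, dc->bypass_readahead_io);
	sysfs_strtoul(bypass_background_io, dc->bypass_background_io);

	/* and in the cached_dev attribute list */
	&sysfs_bypass_readahead_io,
	&sysfs_bypass_background_io,

With something like that in place the switches show up next to the
existing knobs, e.g. /sys/block/bcache0/bcache/bypass_readahead_io, and
can be flipped with a plain echo 0/1.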