Hello, and sorry for the noise if this list is intended for contributions and patches only.

We use bcache quite a lot on our infrastructure, and quite happily so far. We recently noticed strange behavior in the way bcache reports the amount of dirty data and the corresponding percentage of cache used. I opened a related "bug" [0], but here is a quick TL;DR:

* bcache is in writeback mode, running, with one cache device, one backing device, and writeback_percent set to 40
* congested_read_threshold_us and congested_write_threshold_us are both set to 0
* writeback_rate_debug shows 148 GB of dirty data, while priority_stats shows 70% dirty data in the cache; the cache device is 1.6 TB (and the cache size is consistent with that, given nbuckets and bucket_size), so one of the two metrics is lying. Because we are at 70%, I believe we hit CUTOFF_WRITEBACK_SYNC [1] and writes bypass writeback entirely.
* As a result, on an I/O-intensive server we see high I/O latency (roughly 1 second) on both the cache device and the backing device (although I cannot explain why the cache device is affected as well; the latency graphs of both devices are closely aligned).
* When GC is triggered (manually or automatically), writeback resumes for a short period of time (10-15 minutes) and the I/O latency drops, until we reach the 70% dirty mark again.
* We seem to have this metric discrepancy everywhere, but because the default writeback_percent is 10%, we never normally reach the 70% threshold displayed in priority_stats.

Again, sorry if this was the wrong forum.

Regards,

[0]: https://bugzilla.kernel.org/show_bug.cgi?id=206767
[1]: https://github.com/torvalds/linux/blob/v4.15/drivers/md/bcache/writeback.h, line 69
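
P.S. To make the cutoff I am referring to concrete, below is a minimal standalone sketch of how I understand the logic behind [1]. This is my own illustration, not the kernel code: the two constants match what I read in drivers/md/bcache/writeback.h, but the function shape, the in_use samples and the main() driver are made up for the example (in the real code, in_use comes from gc_stats and there are additional conditions such as the detaching flag and partial-stripe handling).

/*
 * Sketch of the writeback cutoff as I understand it (not kernel code).
 * Constants taken from drivers/md/bcache/writeback.h (v4.15).
 */
#include <stdbool.h>
#include <stdio.h>

#define CUTOFF_WRITEBACK        40  /* non-sync writes stop being written back above this */
#define CUTOFF_WRITEBACK_SYNC   70  /* all writes stop being written back above this */

/* in_use: percentage of cache buckets in use (what gc_stats tracks) */
static bool should_writeback(unsigned in_use, bool sync_write)
{
        if (in_use > CUTOFF_WRITEBACK_SYNC)
                return false;       /* past the sync cutoff: no writeback caching at all */
        if (sync_write)
                return true;
        return in_use <= CUTOFF_WRITEBACK;
}

int main(void)
{
        unsigned samples[] = { 35, 55, 69, 70, 71 };

        for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
                printf("in_use=%u%%  sync:%s  async:%s\n",
                       samples[i],
                       should_writeback(samples[i], true)  ? "writeback" : "bypass",
                       should_writeback(samples[i], false) ? "writeback" : "bypass");

        return 0;
}

If priority_stats is right and we really are above 70% in-use/dirty buckets, this would explain why writes go straight to the backing device until GC frees buckets, which matches the temporary recovery we see right after a GC run.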