I am currently using bcache in a configuration where 1 cache SSD is caching IO for 3 backing HDDs. The cache mode is writeback, and I have configured bcache to disable the sequential cutoff and the congestion thresholds so that all write IO goes to the cache device:

  echo writeback > /sys/block/bcache0/bcache/cache_mode
  echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
  echo 0 > /sys/fs/bcache/<cache set>/congested_read_threshold_us
  echo 0 > /sys/fs/bcache/<cache set>/congested_write_threshold_us

I have also increased the minimum writeback rate so that the dirty data never fills 100% of the available cache space:

  echo 16384 > /sys/block/bcache0/bcache/writeback_rate_minimum

(The full set of tunings is collected in the P.S. at the end of this mail.)

I have noticed that during times of higher throughput, the performance of fsynced writes (and other high-IOPS workloads as well) suffers a disproportionate amount. The throughput is something the SSD should be able to handle easily, and it still has enough free cache to buffer any writes, so I am confused as to why performance would suffer to such a high degree.

Are there any internal mechanisms that perhaps measure latency to the backing devices and then throttle IO? If so, how would I go about tuning those?

Regards,
Benard
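
P.S. For completeness, the tuning applied at boot looks roughly like the sketch below. The three bcache device names and the cache set UUID are placeholders for the real ones on my system:

  #!/bin/sh
  # Apply the bcache tunings described above to all three backing
  # devices and to the cache set. Device names and the cache set
  # UUID are placeholders.
  CSET=/sys/fs/bcache/<cache set uuid>

  for dev in bcache0 bcache1 bcache2; do
      echo writeback > /sys/block/$dev/bcache/cache_mode
      echo 0         > /sys/block/$dev/bcache/sequential_cutoff
      echo 16384     > /sys/block/$dev/bcache/writeback_rate_minimum
  done

  echo 0 > $CSET/congested_read_threshold_us
  echo 0 > $CSET/congested_write_threshold_us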