On Mon, Nov 8, 2021 at 06:38, Dongdong Tao <dongdong.tao@xxxxxxxxxxxxx> wrote:
>
> My understanding is that the bcache doesn't need to wait for the flush
> requests to be completed from the backing device in order to finish
> the write request, since it used a new bio "flush" for the backing
> device.

That's probably true for requests going to the writeback cache. But
requests that bypass the cache must also pass the flush request on to
the backing device - otherwise bcache would violate transactional
guarantees. bcache still guarantees that dirty data is present when it
later replays it to the backing device (and it can probably reduce
flushes here, flushing only just before it removes the writeback log
from its cache).

Personally, I've turned writeback caching off due to increasingly high
latencies as seen by applications [1]. Writes may be slower
throughput-wise, but overall latency is lower, which "feels" faster. I
wonder whether a lot of writes with flush requests bypass the cache...

That said, the initial releases of bcache felt a lot smoother here. But
I'd like to add that I've only ever used it for desktop workloads; I
never used Ceph.

Regards,
Kai

[1]: There is also some odd behavior where bcache detaches dirty caches
on caching-device problems. This sometimes happens for me at reboot,
just after bcache was detected (probably due to an SSD firmware hiccup:
the device temporarily goes missing and re-appears) - and then all
dirty data is lost and discarded. As a consequence, on the next reboot
the cache mode is set to "none" and the devices need to be re-attached.
But by then, the dirty data is long gone.
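
For reference, switching the cache mode (as I mentioned above) is just
a write to sysfs. A minimal sketch in Python - it assumes the device
shows up as bcache0, so adjust the path for your setup:

    #!/usr/bin/env python3
    # Switch a bcache device out of writeback mode via sysfs.
    # /sys/block/bcache0 is an assumed device name; valid modes are
    # writethrough, writeback, writearound, and none.
    from pathlib import Path

    mode = Path("/sys/block/bcache0/bcache/cache_mode")
    mode.write_text("writethrough")

    # Reading the attribute back lists all modes, with the active
    # one shown in [brackets].
    print(mode.read_text().strip())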
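
And re-attaching after such a detach works by echoing the cache set
UUID into the attach attribute - again only a sketch, assuming a
single registered cache set and a single backing device at bcache0:

    #!/usr/bin/env python3
    # Re-attach a backing device to its cache set after bcache
    # detached it. Cache set UUIDs show up as directories under
    # /sys/fs/bcache; bcache0 is an assumed device name.
    from pathlib import Path

    csets = [d.name for d in Path("/sys/fs/bcache").iterdir()
             if d.is_dir() and "-" in d.name]
    Path("/sys/block/bcache0/bcache/attach").write_text(csets[0])

    # The cache mode may still need to be set back afterwards, since
    # it ends up as "none" after the detach.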