On Tue, Jun 16, 2020 at 10:57:43AM -0400, Marc Smith wrote:
> This certainly helps me allow more dirty data than what the defaults
> are set to.

I only have production experience with slightly older kernels (4.15)
and a ~40GB partition of an Intel DC SATA SSD (XFS fs). Average
latency of the bcache device improved a lot with a _reduced_
writeback_percent; I suspect the dirty-block bookkeeping adds its own
I/O. Currently I run them even at writeback_percent=1.

Not exactly answering your question, though :-)

Matthias

> But a couple of other follow-up questions:
> - Any additional recommended tuning/settings for small cache devices?
> - Is the soft threshold for dirty writeback data 70% so there is
>   always room for metadata on the cache device? Is it dangerous to
>   try to recompile with larger maximums?
> - I'm still studying the code, but so far I don't see this, and
>   wanted to confirm it: the writeback thread doesn't look at
>   congestion on the backing device when flushing out data (and, say,
>   pause the writeback thread as needed)? For spinning media, if lots
>   of latency-sensitive reads are going directly to the backing device
>   while we're flushing a lot of data from cache to backing, that
>   hurts.
>
> Thanks,
>
> Marc
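
As background for the tuning Matthias describes above: writeback_percent
is a per-backing-device sysfs attribute, so lowering it is just a write
to the corresponding sysfs file. A minimal sketch, assuming the backing
device is registered as bcache0 (the device name and the choice of
Python are illustrative only, not something from this thread):

    # tune_writeback.py -- illustrative sketch; needs root
    from pathlib import Path

    def set_writeback_percent(dev: str, percent: int) -> None:
        """Set the dirty-data writeback soft threshold for one bcache
        backing device by writing its sysfs attribute."""
        knob = Path(f"/sys/block/{dev}/bcache/writeback_percent")
        knob.write_text(f"{percent}\n")

    if __name__ == "__main__":
        # Matthias reports good average latency even at 1%.
        set_writeback_percent("bcache0", 1)

Note that a value written this way does not persist across reboots; it
is usually reapplied from a udev rule or a boot-time script.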