On Thu, Apr 20, 2017 at 09:11:22AM +0200, Michael Weissenbacher wrote:
> On 20.04.2017 01:48, Dave Chinner wrote:
> > On Wed, Apr 19, 2017 at 11:47:43PM +0200, Michael Weissenbacher wrote:
> >> OK, do I understand you correctly that the xfsaild does all the
> >> actual work of updating the inodes? And it's doing that
> >> single-threaded, reading both the log and the inodes themselves?
> >
> > The problem is that the backing buffers that are used for flushing
> > inodes have been reclaimed due to memory pressure, but the inodes in
> > cache are still dirty. Hence to write the dirty inodes, we first
> > have to read the inode buffer back into memory.
> >
> Interesting find. Is there a way to prevent those buffers from getting
> reclaimed?

Not really. It's simply a side effect of memory reclaim not being able
to reclaim inodes or the page cache because they are dirty, so it puts
lots more pressure on the clean caches instead. The working set in
those other caches gets trashed, and it's a downward spiral from there,
because it means dirty inodes and pages take longer to flush and
require blocking IO to refill the trashed caches on demand...

> Would adjusting vfs_cache_pressure help?

Unlikely.

> Or adding more memory to the system?

Unlikely - that'll just lead to bigger stalls.

> In fact the best thing would be to disable file content caching
> completely. Because of the use-case (backup server) it's worthless to
> cache file content.
> My primary objective is to avoid those stalls and reduce latency, at
> the expense of throughput.

Set the dirty page cache writeback thresholds low (a couple of hundred
MB instead of the default 10/20% of memory) so that data writeback
starts early and dirty pages are throttled to a small amount of memory.
This will help keep the page cache clean and immediately reclaimable,
hence it shouldn't put as much pressure on the other caches when memory
reclaim is required.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
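
As a concrete sketch of the threshold tuning suggested above, using the
vm.dirty_* sysctls; the 256MB/512MB values are illustrative assumptions,
not numbers from this thread, so tune them to the machine:

  # /etc/sysctl.conf (or apply at runtime with "sysctl -w")
  # The *_bytes variants override the corresponding *_ratio variants.
  vm.dirty_background_bytes = 268435456   # start background writeback at 256MB
  vm.dirty_bytes = 536870912              # throttle/block writers at 512MB

With the *_bytes variants set, the dirty thresholds no longer scale with
RAM size, so adding memory later won't silently raise them again.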
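
On the "disable file content caching completely" point: there is no
global kernel switch for that, but backup software can hint that its
file data is not worth caching via posix_fadvise(). A hypothetical
sketch in C - copy_uncached() and the 1MB chunk size are invented for
illustration, and error handling is abbreviated:

  /* Hypothetical sketch: copy file data while hinting the kernel to
   * drop it from the page cache as it goes. Error handling abbreviated. */
  #define _POSIX_C_SOURCE 200112L
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static void drop_cached(int fd, off_t off, off_t len)
  {
          /* POSIX_FADV_DONTNEED asks the kernel to drop clean page
           * cache in the range; it returns an errno value directly. */
          int err = posix_fadvise(fd, off, len, POSIX_FADV_DONTNEED);

          if (err)
                  fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
  }

  int copy_uncached(int in_fd, int out_fd)
  {
          static char buf[1 << 20];       /* 1MB chunks */
          off_t off = 0;
          ssize_t n;

          while ((n = read(in_fd, buf, sizeof(buf))) > 0) {
                  if (write(out_fd, buf, n) != n)
                          return -1;      /* short write; keep it simple */
                  if (fdatasync(out_fd))  /* dirty pages can't be dropped */
                          return -1;
                  drop_cached(in_fd, off, n);
                  drop_cached(out_fd, off, n);
                  off += n;
          }
          return n < 0 ? -1 : 0;
  }

POSIX_FADV_DONTNEED only drops clean pages, which is why the written
side is fdatasync()'d before the hint; opening with O_DIRECT avoids the
page cache entirely, at the cost of alignment requirements.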
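
And to confirm the read-before-write behaviour described at the top of
the thread, the XFS buffer read tracepoint can be watched during a
stall. A minimal sketch, assuming trace-cmd is installed and the kernel
exposes the xfs:xfs_buf_read event (check
/sys/kernel/debug/tracing/events/xfs/ for the exact name):

  # record buffer reads system-wide for ten seconds, then look for
  # reads issued by the AIL push thread
  trace-cmd record -e xfs:xfs_buf_read sleep 10
  trace-cmd report | grep xfsaild

Reads attributed to xfsaild/<dev> while dirty inodes are being flushed
are the re-reads of reclaimed inode buffers.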