On Sat, Oct 26, 2019 at 11:54:02AM +0200, Gionatan Danti wrote:
> On 26-10-2019 01:39 Dave Chinner wrote:
> > Again, it's a trade-off.
> >
> > 256kB iclogs mean that a crash can leave an unrecoverable 2MB hole
> > in the journal, while 32kB iclogs means it's only 256kB.
>
> Sure, but a crash will always cause the loss of unsynced data, especially
> when using deferred logging and/or deferred allocation, right?

Yes, but there's a big difference between 2MB and 256kB, especially
if it's a small filesystem (very common) and the log is only ~10MB
in size.

> > 256kB iclogs mean 2MB of memory usage per filesystem, 32kB is only
> > 256kB. We have users with hundreds of individual XFS filesystems
> > mounted on single machines, and so 256kB iclogs is a lot of wasted
> > memory...
>
> Just wondering: 1000 filesystems with 256k logbsize would result in 2 GB of
> memory consumed by journal buffers. Is this considered too much memory for a
> system managing 1000 filesystems? The page cache writeback memory
> consumption on these systems (probably equipped with tens of GB of RAM)
> would dwarf any journal buffers, no?

Log buffers are a static memory footprint. Page cache memory is
dynamic and can be trimmed to nothing when there is memory pressure.
However, memory allocated to log buffers is pinned for the life of
the mount, whether that filesystem is busy or not - the memory is
not reclaimable.

The default of 8 log buffers of 32kB each is a good trade-off
between minimising memory footprint and maintaining performance over
a wide range of storage and use cases. If that's still too much
memory per filesystem, then the user can compromise on performance
by reducing the number of logbufs. If performance is too slow, then
the user can increase the memory footprint to improve performance.

The default values sit in the middle ground on both axes - enough
logbufs and a large enough iclog size for decent performance, but
with a small enough memory footprint that dense or resource
constrained installations can be deployed without any tweaking.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
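
As a back-of-the-envelope illustration of the numbers discussed above,
here is a minimal Python sketch, assuming the pinned footprint is simply
logbufs * logbsize per mounted filesystem (the function name and the
flat model are assumptions for illustration, not taken from the XFS
code):

    # Pinned log buffer memory for a set of XFS mounts, assuming the
    # footprint is simply logbufs * logbsize per filesystem
    # (illustrative model only, not derived from the kernel code).
    def log_buffer_footprint_mb(num_filesystems, logbufs=8, logbsize_kb=32):
        per_fs_kb = logbufs * logbsize_kb           # e.g. 8 x 32kB = 256kB
        return num_filesystems * per_fs_kb / 1024   # kB -> MB

    # Defaults (logbufs=8, logbsize=32k): 256kB per filesystem,
    # ~250MB pinned for 1000 mounted filesystems.
    print(log_buffer_footprint_mb(1000))                     # 250.0
    # logbsize=256k: 2MB per filesystem, ~2GB for 1000 filesystems.
    print(log_buffer_footprint_mb(1000, logbsize_kb=256))    # 2000.0

The knobs being traded off here are the logbufs and logbsize mount
options, so a memory-constrained deployment would mount with something
like -o logbufs=2,logbsize=32k and accept the reduced journal
throughput.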