On Tue, Jun 08, 2021 at 08:32:04AM -0700, Darrick J. Wong wrote:
> On Tue, Jun 08, 2021 at 07:48:05AM -0400, Brian Foster wrote:
> > users/workloads that might operate under these conditions? I guess
> > historically we've always recommended to not consistently operate in
> > <20% free space conditions, so to some degree there is an expectation
> > for less than optimal behavior if one decides to constantly bash an fs
> > into ENOSPC. Then again with large enough files, will/can we put the
> > filesystem into that state ourselves without any indication to the user?
> >
> > I kind of wonder if unless/until there's some kind of efficient feedback
> > between allocation and "pending" free space, whether deferred
> > inactivation should be an optimization tied to some kind of heuristic
> > that balances the amount of currently available free space against
> > pending free space (but I've not combed through the code enough to grok
> > whether this already does something like that).
>
> Ooh! You mentioned "efficient feedback", and one sprung immediately to
> mind -- if the AG is near full (or above 80% full, or whatever) we
> schedule the per-AG inodegc worker immediately instead of delaying it.

That's what the lowspace thresholds in speculative preallocation are
for... 20% of a 1TB AG is an awful lot of freespace still remaining,
and if someone is asking for a 200GB fallocate(), they are always going
to get some fragmentation on a used, 80% full filesystem regardless of
deferred inode inactivation.

IMO, if you're going to do this, use the same thresholds we already use
to limit preallocation near global ENOSPC, and graduate it to be more
severe the closer we get to global ENOSPC...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
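
[Editor's illustration, not part of the original mail or of XFS: a minimal
sketch of the kind of graduated behaviour Dave describes, i.e. reusing
percentage thresholds like the ones speculative preallocation applies near
global ENOSPC (cf. xfs_set_low_space_thresholds()) to shrink the inodegc
delay as free space runs out. The function name, the default delay, and the
plain block-count parameters are all hypothetical.]

    /*
     * Hypothetical sketch only -- not actual XFS code.  Graduate the
     * per-AG inodegc queue delay by how close the filesystem is to
     * global ENOSPC: full delay above 5% free, shrinking in steps,
     * and immediate inactivation at or below 1% free so that pending
     * free space is returned before the allocator starves.
     */
    #include <stdint.h>

    #define INODEGC_DELAY_DEFAULT_MS	500	/* assumed default batching delay */

    static unsigned int
    inodegc_delay_ms(uint64_t freeblks, uint64_t totalblks)
    {
    	uint64_t pct_free;
    	int i;

    	if (!totalblks)
    		return 0;

    	pct_free = (freeblks * 100) / totalblks;

    	/* <=1% free -> 0ms, <=2% -> 100ms, ... <=5% -> 400ms */
    	for (i = 1; i <= 5; i++) {
    		if (pct_free <= (uint64_t)i)
    			return (INODEGC_DELAY_DEFAULT_MS * (i - 1)) / 5;
    	}

    	/* Plenty of space left: keep the normal batching delay. */
    	return INODEGC_DELAY_DEFAULT_MS;
    }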