On Mon, Feb 14, 2022 at 05:54:16PM -0800, Darrick J. Wong wrote:
> On Fri, Feb 11, 2022 at 10:08:06AM +1100, Dave Chinner wrote:
> > On Thu, Feb 10, 2022 at 02:03:23PM -0500, Brian Foster wrote:
> > > On Wed, Feb 02, 2022 at 01:22:40PM +1100, Dave Chinner wrote:
> > > > On Mon, Jan 24, 2022 at 11:57:12AM -0500, Brian Foster wrote:
> > > That said, why not conditionally tag and divert to a background worker
> > > when the inodegc is disabled? That could allow NEEDS_INACTIVE inodes to
> > > be claimed/recycled from other contexts in scenarios like when the fs is
> > > frozen, since they won't be stuck in inaccessible and inactive percpu
> > > queues, but otherwise preserves current behavior in normal runtime
> > > conditions. Darrick mentioned online repair wanting to do something
> > > similar earlier, but it's not clear to me if scrub could or would want
> > > to disable the percpu inodegc workers in favor of a temporary/background
> > > mode while repair is running. I'm just guessing that performance is
> > > probably small enough of a concern in that situation that it wouldn't be
> > > a mitigating factor. Hm?
> >
> > We probably could do this, but I'm not sure the complexity is
> > justified by the rarity of the problem it is trying to avoid.
> > Freezes are not long term, nor are they particularly common for
> > performance sensitive workloads. Hence I'm just not sure this corner
> > case is important enough to justify doing the work given that we've
> > had similar freeze-will-delay-some-stuff-indefinitely behaviour for
> > a long time...
>
> We /do/ have a few complaints lodged about hangcheck warnings when the
> filesystem has to be frozen for a very long time. It'd be nice to
> unblock the callers that want to grab a still-reachable inode, though.

I suspect this problem largely goes away with moving inactivation up
to the VFS level - we'll still block the background EOF block trimming
work on a freeze, but it won't prevent lookups on those inodes from
taking new references to the inode...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
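
[For illustration only: a rough sketch of the divert-when-disabled idea
Brian floats above - park NEEDS_INACTIVE inodes on a still-reachable
list while the per-cpu inodegc workers are disabled (e.g. across a
freeze), and drain that list from a background worker once inodegc is
re-enabled. Every name below (frozen_gc, fgc_inode, fgc_queue,
fgc_enable, fgc_init) is invented for this sketch and is not the
existing XFS code.]

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Hypothetical per-mount state for deferred inactivation. */
struct frozen_gc {
	spinlock_t		lock;		/* protects @deferred and @enabled */
	struct list_head	deferred;	/* parked NEEDS_INACTIVE inodes */
	struct work_struct	work;		/* background drain */
	bool			enabled;	/* false while per-cpu inodegc is off */
};

/* Hypothetical stand-in for the inode; only the list linkage matters here. */
struct fgc_inode {
	struct list_head	fgc_list;
};

/* Drain everything that was parked while inodegc was disabled. */
static void fgc_worker(struct work_struct *work)
{
	struct frozen_gc *fgc = container_of(work, struct frozen_gc, work);
	struct fgc_inode *ip, *next;
	LIST_HEAD(dispose);

	spin_lock(&fgc->lock);
	list_splice_init(&fgc->deferred, &dispose);
	spin_unlock(&fgc->lock);

	list_for_each_entry_safe(ip, next, &dispose, fgc_list) {
		list_del_init(&ip->fgc_list);
		/* ... inactivate the inode here ... */
	}
}

static void fgc_init(struct frozen_gc *fgc)
{
	spin_lock_init(&fgc->lock);
	INIT_LIST_HEAD(&fgc->deferred);
	INIT_WORK(&fgc->work, fgc_worker);
	fgc->enabled = true;
}

/*
 * Queue an inode for inactivation.  If the normal per-cpu inodegc
 * machinery is disabled, park the inode on a global, lockable list so
 * other contexts can still find and recycle it; otherwise hand it to
 * the usual per-cpu queues (elided).
 */
static void fgc_queue(struct frozen_gc *fgc, struct fgc_inode *ip)
{
	spin_lock(&fgc->lock);
	if (!fgc->enabled) {
		list_add_tail(&ip->fgc_list, &fgc->deferred);
		spin_unlock(&fgc->lock);
		return;
	}
	spin_unlock(&fgc->lock);

	/* ... normal path: per-cpu inodegc queueing ... */
}

/* Re-enable inodegc (e.g. at unfreeze) and kick off the deferred drain. */
static void fgc_enable(struct frozen_gc *fgc)
{
	spin_lock(&fgc->lock);
	fgc->enabled = true;
	spin_unlock(&fgc->lock);
	schedule_work(&fgc->work);
}

The property that matters for the freeze case is that the parked inodes
sit on a list other contexts can lock and walk, so a lookup that hits a
NEEDS_INACTIVE inode during a freeze could steal it off the list and
recycle it rather than blocking on an inaccessible per-cpu queue.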