On Thu, Feb 01, 2024 at 11:30:14AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
> 
> We currently don't initialise the inode->i_gclist member because it
> is not necessary for a pure llist_add/llist_del_all producer-
> consumer usage pattern. However, for lazy removal from the inodegc
> list, we need to be able to determine if the inode is already on an
> inodegc list before we queue it.
> 
> We can do this detection by using llist_on_list(), but this requires
> that we initialise the llist_node before we use it, and that we
> re-initialise it when we remove it from the llist.
> 
> Because we already serialise the inodegc list add with inode state
> changes under the ip->i_flags_lock, we can do the initialisation on
> list removal atomically with the state change. We can also do the
> check of whether the inode is already on an inodegc list inside the
> state change region on insert.
> 
> This gives us the ability to use llist_on_list(&ip->i_gclist) to
> determine if the inode needs to be queued for inactivation without
> having to depend on inode state flags.
> 
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_icache.c | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 425b55526386..2dd1559aade2 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -114,6 +114,7 @@ xfs_inode_alloc(
>  	spin_lock_init(&ip->i_ioend_lock);
>  	ip->i_next_unlinked = NULLAGINO;
>  	ip->i_prev_unlinked = 0;
> +	init_llist_node(&ip->i_gclist);
>  
>  	return ip;
>  }
> @@ -1875,10 +1876,16 @@ xfs_inodegc_worker(
>  	llist_for_each_entry_safe(ip, n, node, i_gclist) {
>  		int error;
>  
> -		/* Switch state to inactivating. */
> +		/*
> +		 * Switch state to inactivating and remove the inode from the
> +		 * gclist. This allows the use of llist_on_list() in the queuing
> +		 * code to determine if the inode is already on an inodegc
> +		 * queue.
> +		 */
>  		spin_lock(&ip->i_flags_lock);
>  		ip->i_flags |= XFS_INACTIVATING;
>  		ip->i_flags &= ~XFS_NEED_INACTIVE;
> +		init_llist_node(&ip->i_gclist);
>  		spin_unlock(&ip->i_flags_lock);
>  
>  		error = xfs_inodegc_inactivate(ip);
> @@ -2075,11 +2082,20 @@ xfs_inodegc_queue(
>  	trace_xfs_inode_set_need_inactive(ip);
>  
>  	/*
> -	 * Put the addition of the inode to the gc list under the
> +	 * The addition of the inode to the gc list is done under the
>  	 * ip->i_flags_lock so that the state change and list addition are
>  	 * atomic w.r.t. lookup operations under the ip->i_flags_lock.
> +	 * The removal is also done under the ip->i_flags_lock and so this
> +	 * allows us to safely use llist_on_list() here to determine if the
> +	 * inode is already queued on an inactivation queue.
>  	 */
>  	spin_lock(&ip->i_flags_lock);
> +	ip->i_flags |= XFS_NEED_INACTIVE;

Oh, I see, the flag setting line moves back.  Other than that everything
makes sense here...

Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>

--D

> +
> +	if (llist_on_list(&ip->i_gclist)) {
> +		spin_unlock(&ip->i_flags_lock);
> +		return;
> +	}
>  
>  	cpu_nr = get_cpu();
>  	gc = this_cpu_ptr(mp->m_inodegc);
> @@ -2088,7 +2104,6 @@ xfs_inodegc_queue(
>  	WRITE_ONCE(gc->items, items + 1);
>  	shrinker_hits = READ_ONCE(gc->shrinker_hits);
>  
> -	ip->i_flags |= XFS_NEED_INACTIVE;
>  	spin_unlock(&ip->i_flags_lock);
>  
>  	/*
> -- 
> 2.43.0
> 
> 
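
For reference, the queueing pattern the patch depends on can be sketched
outside of XFS roughly like this. The my_obj/my_queue names are
hypothetical, not the XFS code itself; it assumes a kernel where
<linux/llist.h> provides init_llist_node() and llist_on_list():

	#include <linux/llist.h>
	#include <linux/spinlock.h>

	struct my_obj {
		spinlock_t		lock;	/* role of ip->i_flags_lock */
		struct llist_node	gclist;	/* role of ip->i_gclist */
	};

	static LLIST_HEAD(my_queue);

	static void my_obj_init(struct my_obj *obj)
	{
		spin_lock_init(&obj->lock);
		/* must be initialised before llist_on_list() is usable */
		init_llist_node(&obj->gclist);
	}

	/* Producer: queue the object only if it is not already queued. */
	static void my_queue_obj(struct my_obj *obj)
	{
		spin_lock(&obj->lock);
		if (llist_on_list(&obj->gclist)) {
			/* already queued, nothing to do */
			spin_unlock(&obj->lock);
			return;
		}
		llist_add(&obj->gclist, &my_queue);
		spin_unlock(&obj->lock);
	}

	/* Consumer: drain the queue, re-initialising each node. */
	static void my_drain_queue(void)
	{
		struct llist_node *node = llist_del_all(&my_queue);
		struct my_obj *obj, *next;

		llist_for_each_entry_safe(obj, next, node, gclist) {
			spin_lock(&obj->lock);
			/*
			 * Re-initialise the node so llist_on_list()
			 * returns false for later queue attempts. Done
			 * under the same lock as the producer's check
			 * so the two cannot race.
			 */
			init_llist_node(&obj->gclist);
			spin_unlock(&obj->lock);
			/* ... process obj here ... */
		}
	}

The key point is that the producer's llist_on_list() check and the
consumer's init_llist_node() both run under the per-object lock, so an
object can never be added to the lockless list twice.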