From: Dave Chinner <dchinner@xxxxxxxxxx>

We currently don't initialise the inode->i_gclist member because it
is not necessary for a pure llist_add/llist_del_all producer-consumer
usage pattern. However, for lazy removal from the inodegc list, we
need to be able to determine if the inode is already on an inodegc
list before we queue it.

We can do this detection by using llist_on_list(), but this requires
that we initialise the llist_node before we use it, and we
re-initialise it when we remove it from the llist.

Because we already serialise the inodegc list add with inode state
changes under the ip->i_flags_lock, we can do the initialisation on
list removal atomically with the state change. We can also do the
check of whether the inode is already on an inodegc list inside the
state change region on insert.

This gives us the ability to use llist_on_list(&ip->i_gclist) to
determine if the inode needs to be queued for inactivation without
having to depend on inode state flags.

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>
---
 fs/xfs/xfs_icache.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 9a362964f656..559b8f71dc91 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -113,6 +113,7 @@ xfs_inode_alloc(
 	spin_lock_init(&ip->i_ioend_lock);
 	ip->i_next_unlinked = NULLAGINO;
 	ip->i_prev_unlinked = 0;
+	init_llist_node(&ip->i_gclist);
 
 	return ip;
 }
@@ -1880,8 +1881,14 @@ xfs_inodegc_worker(
 	llist_for_each_entry_safe(ip, n, node, i_gclist) {
 		int	error;
 
-		/* Switch state to inactivating. */
+		/*
+		 * Switch state to inactivating and remove the inode from the
+		 * gclist. This allows the use of llist_on_list() in the queuing
+		 * code to determine if the inode is already on an inodegc
+		 * queue.
+		 */
 		spin_lock(&ip->i_flags_lock);
+		init_llist_node(&ip->i_gclist);
 		ip->i_flags |= XFS_INACTIVATING;
 		ip->i_flags &= ~XFS_NEED_INACTIVE;
 		spin_unlock(&ip->i_flags_lock);
@@ -2082,13 +2089,21 @@ xfs_inodegc_queue(
 	trace_xfs_inode_set_need_inactive(ip);
 
 	/*
-	 * Put the addition of the inode to the gc list under the
+	 * The addition of the inode to the gc list is done under the
 	 * ip->i_flags_lock so that the state change and list addition are
 	 * atomic w.r.t. lookup operations under the ip->i_flags_lock.
+	 * The removal is also done under the ip->i_flags_lock and so this
+	 * allows us to safely use llist_on_list() here to determine if the
+	 * inode is already queued on an inactivation queue.
 	 */
 	spin_lock(&ip->i_flags_lock);
 	ip->i_flags |= XFS_NEED_INACTIVE;
 
+	if (llist_on_list(&ip->i_gclist)) {
+		spin_unlock(&ip->i_flags_lock);
+		return;
+	}
+
 	cpu_nr = get_cpu();
 	gc = this_cpu_ptr(mp->m_inodegc);
 	llist_add(&ip->i_gclist, &gc->list);

-- 
2.43.0
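
Aside: the on-list detection trick the patch relies on can be shown with a
small, self-contained userspace sketch. Everything below is invented for
illustration only (gc_node, fake_inode, gc_queue, gc_drain, and so on are
not kernel names); a pthread mutex stands in for ip->i_flags_lock and also
protects the list, whereas the real inodegc list is a lock-free llist. The
point demonstrated is the same: an off-list node points at itself (as
init_llist_node() arranges), so the queueing side can detect "already
queued" with a next != self test (what llist_on_list() does), provided the
consumer re-initialises the node under the same lock as the state change.

/*
 * Illustration only, not kernel code: a node that is off-list points
 * at itself, mirroring init_llist_node(); on_list() mirrors
 * llist_on_list(). The mutex stands in for ip->i_flags_lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct gc_node {
	struct gc_node	*next;		/* points to self when not queued */
};

struct fake_inode {
	struct gc_node	gclist;
	unsigned int	flags;
	int		ino;
};

#define NEED_INACTIVE	(1u << 0)
#define INACTIVATING	(1u << 1)

static struct gc_node	*gc_head;	/* stand-in for the per-cpu gc->list */
static pthread_mutex_t	flags_lock = PTHREAD_MUTEX_INITIALIZER;

static void init_node(struct gc_node *n) { n->next = n; }
static bool on_list(struct gc_node *n)   { return n->next != n; }

/* Mirrors xfs_inodegc_queue(): skip the list add if already queued. */
static void gc_queue(struct fake_inode *ip)
{
	pthread_mutex_lock(&flags_lock);
	ip->flags |= NEED_INACTIVE;
	if (on_list(&ip->gclist)) {
		pthread_mutex_unlock(&flags_lock);
		return;
	}
	ip->gclist.next = gc_head;
	gc_head = &ip->gclist;
	pthread_mutex_unlock(&flags_lock);
	printf("inode %d queued\n", ip->ino);
}

/*
 * Mirrors xfs_inodegc_worker(): detach the whole list, then re-init
 * each node under the same lock as the state change so that
 * gc_queue() can trust on_list().
 */
static void gc_drain(void)
{
	struct gc_node	*node;

	pthread_mutex_lock(&flags_lock);
	node = gc_head;
	gc_head = NULL;
	pthread_mutex_unlock(&flags_lock);

	while (node) {
		struct fake_inode *ip = (struct fake_inode *)
			((char *)node - offsetof(struct fake_inode, gclist));

		node = node->next;

		pthread_mutex_lock(&flags_lock);
		init_node(&ip->gclist);
		ip->flags |= INACTIVATING;
		ip->flags &= ~NEED_INACTIVE;
		pthread_mutex_unlock(&flags_lock);

		printf("inode %d inactivated\n", ip->ino);
	}
}

int main(void)
{
	struct fake_inode a = { .ino = 1 };

	init_node(&a.gclist);
	gc_queue(&a);
	gc_queue(&a);	/* no-op: node already on the list */
	gc_drain();
	gc_queue(&a);	/* queues again: drain re-initialised the node */
	gc_drain();
	return 0;
}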