Re: [PATCH] xfs: fix race condition in inodegc list and cpumask handling

On Tue, Nov 26, 2024 at 08:03:43AM +1100, Dave Chinner wrote:

Sorry for the late reply; I wanted to make the problem as clear as
possible, but some doubts remain.

> On Mon, Nov 25, 2024 at 09:52:58AM +0800, Long Li wrote:
> > There is a race condition between inodegc queue and inodegc worker where
> > the cpumask bit may not be set when concurrent operations occur.
> 
> What problems does this cause? i.e. how do we identify systems
> hitting this issue?
> 

I haven't encountered any actual issues, but while reviewing 62334fab4762
("xfs: use per-mount cpumask to track nonempty percpu inodegc lists"), I
noticed a potential problem.

When the gc worker runs on a CPU other than the specified one due to
load balancing, it can race with xfs_inodegc_queue() operating on the
same struct xfs_inodegc. If xfs_inodegc_queue() adds the last inode to
the gc list during this window, that inode may never be processed and
reclaimed because the cpumask bit is left unset. This may lead to memory
leaks after filesystem unmount; I'm not sure whether there are other,
more serious implications.
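
To make the window concrete, the two paths look roughly like this after
62334fab4762 (my paraphrase of fs/xfs/xfs_icache.c, heavily simplified;
the surrounding details may not match the actual code exactly):

  /* queue side (xfs_inodegc_queue, simplified) */
  llist_add(&ip->i_gclist, &gc->list);
  cpumask_test_and_set_cpu(cpu, &mp->m_inodegc_cpumask);
          /* bit already set -> no store here, so nothing orders
           * this against the worker's cpumask_clear_cpu() */

  /* worker side (xfs_inodegc_worker, simplified) */
  node = llist_del_all(&gc->list);  /* ran before the llist_add above */
  ...
  cpumask_clear_cpu(gc->cpu, &mp->m_inodegc_cpumask);

If the llist_add() lands between the worker's llist_del_all() and
cpumask_clear_cpu(), the queue side finds the bit still set and skips
requeueing, the worker then clears the bit, and we are left with a
non-empty gc list whose cpumask bit is clear.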

> > 
> > Current problematic sequence:
> > 
> >   CPU0                             CPU1
> >   --------------------             ---------------------
> >   xfs_inodegc_queue()              xfs_inodegc_worker()
> >                                      llist_del_all(&gc->list)
> >     llist_add(&ip->i_gclist, &gc->list)
> >     cpumask_test_and_set_cpu()
> >                                      cpumask_clear_cpu()
> >                   < cpumask not set >
> > 
> > Fix this by moving llist_del_all() after cpumask_clear_cpu() to ensure
> > proper ordering. This change ensures that when the worker thread clears
> > the cpumask, any concurrent queue operations will either properly set
> > the cpumask bit or have already emptied the list.
> > 
> > Also remove the unnecessary smp_mb__{before/after}_atomic() barriers,
> > since the llist_* operations already provide the required ordering
> > semantics. This makes the code cleaner.
> 
> IIRC, the barriers were for ordering the cpumask bitmap ops against
> llist operations. There are calls elsewhere to for_each_cpu() that
> then use llist_empty() checks (e.g. xfs_inodegc_queue_all/wait_all),
> so on relaxed architectures (like alpha) I think we have to ensure
> the bitmask ops carried full ordering against the independent llist
> ops themselves. i.e. llist_empty() just uses READ_ONCE, so it only
> orders against other llist ops and won't guarantee any specific
> ordering against cpumask modifications.
> 
> I could be remembering incorrectly, but I think that was the
> original reason for the barriers. Can you please confirm that the
> cpumask iteration/llist_empty checks do not need these bitmask
> barriers anymore? If that's ok, then the change looks fine.
> 

Even on architectures with relaxed memory ordering (like alpha),
llist_add() already has full barrier semantics, so I think the
smp_mb__before_atomic() barrier in xfs_inodegc_queue() can be removed:

  llist_add()
    try_cmpxchg
      raw_try_cmpxchg
        arch_cmpxchg
  
  arch_cmpxchg() on alpha, from arch/alpha/include/asm/cmpxchg.h:
  
  #define arch_cmpxchg(ptr, o, n)                                         \
  ({                                                                      \
          __typeof__(*(ptr)) __ret;                                       \
          __typeof__(*(ptr)) _o_ = (o);                                   \
          __typeof__(*(ptr)) _n_ = (n);                                   \
          smp_mb();                                                       \
          __ret = (__typeof__(*(ptr))) ____cmpxchg((ptr),                 \
                  (unsigned long)_o_, (unsigned long)_n_, sizeof(*(ptr)));\
          smp_mb();                                                       \
          ^^^^^^^
          __ret;                                                          \
  })

I'm wondering whether we really need to "Ensure the list add is always
seen by xfs_inodegc_queue_all() who finds the cpumask bit set". Missing
the list add seems harmless: the inode can be picked up in the next
round, and xfs_inodegc_queue_all() does not guarantee that it processes
every inode that is being, or will be, added to the gc llist anyway.
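
For context, xfs_inodegc_queue_all() looks roughly like this (again a
paraphrase from memory; the mod_delayed_work_on() call in particular is
my recollection and may not match the current code exactly):

  for_each_cpu(cpu, &mp->m_inodegc_cpumask) {
          gc = per_cpu_ptr(mp->m_inodegc, cpu);
          if (!llist_empty(&gc->list)) {  /* plain READ_ONCE inside */
                  mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
                  ret = true;
          }
  }

An inode queued just after the llist_empty() check is already only
caught by a later pass, which is why the missed list add looks benign
to me.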

If we do need that guarantee, should we add a barrier between reading
m_inodegc_cpumask and gc->list in xfs_inodegc_queue_all() to prevent 
load-load reordering?
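
Something like this, purely to illustrate the placement I mean (the
smp_rmb() here is hypothetical, not a concrete patch proposal):

  for_each_cpu(cpu, &mp->m_inodegc_cpumask) {
          gc = per_cpu_ptr(mp->m_inodegc, cpu);
          /* order the cpumask bit read before the gc->list read */
          smp_rmb();
          if (!llist_empty(&gc->list))
                  ...
  }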

Maybe I'm misunderstanding something.

Thanks,
Long Li



