On Thu, Dec 05, 2024 at 02:59:16PM +0800, Long Li wrote:
> On Tue, Nov 26, 2024 at 08:03:43AM +1100, Dave Chinner wrote:
>
> Sorry for the late reply; I wanted to make the problem as clear as
> possible, but there are still some doubts.
>
> > On Mon, Nov 25, 2024 at 09:52:58AM +0800, Long Li wrote:
> > > There is a race condition between inodegc queue and inodegc worker
> > > where the cpumask bit may not be set when concurrent operations
> > > occur.
> >
> > What problems does this cause? i.e. how do we identify systems
> > hitting this issue?
> >
> I haven't encountered any actual issues, but while reviewing
> 62334fab4762 ("xfs: use per-mount cpumask to track nonempty percpu
> inodegc lists"), I noticed there is a potential problem.
>
> When the gc worker runs on a CPU other than the specified one due to
> load balancing,

How? inodegc uses get_cpu() to pin the task to the CPU while it
processes the queue, and then queues the work to be run on that CPU.
The per-CPU inodegc queue is then processed by a single CPU-affine
worker thread. The whole point of this setup is that scheduler load
balancing, etc, cannot disturb the CPU affinity of the queues and the
worker threads that service them. How does load balancing break
explicit CPU-affine kernel task scheduling?

> it could race with xfs_inodegc_queue() processing the
> same struct xfs_inodegc. If xfs_inodegc_queue() adds the last inode
> to the gc list during this race, that inode might never be processed
> and reclaimed because the cpumask bit is not set. This may lead to
> memory leaks after filesystem unmount; I'm unsure if there are other,
> more serious implications.

xfs_inodegc_stop() should handle this all just fine. It removes the
enabled flag, then moves into a loop that should catch list adds that
were in progress when the enabled flag was cleared.

> > > Current problematic sequence:
> > >
> > >   CPU0                                  CPU1
> > >   --------------------                  ---------------------
> > >   xfs_inodegc_queue()                   xfs_inodegc_worker()
> > >                                           llist_del_all(&gc->list)
> > >   llist_add(&ip->i_gclist, &gc->list)
> > >   cpumask_test_and_set_cpu()
> > >                                           cpumask_clear_cpu()
> > >                                         < cpumask not set >
> > >
> > > Fix this by moving llist_del_all() after cpumask_clear_cpu() to
> > > ensure proper ordering. This change ensures that when the worker
> > > thread clears the cpumask, any concurrent queue operations will
> > > either properly set the cpumask bit or have already emptied the
> > > list.
> > >
> > > Also remove the unnecessary smp_mb__{before/after}_atomic()
> > > barriers, since the llist_* operations already provide the
> > > required ordering semantics. This makes the code cleaner.
> >
> > IIRC, the barriers were for ordering the cpumask bitmap ops against
> > llist operations. There are calls elsewhere to for_each_cpu() that
> > then use llist_empty() checks (e.g. xfs_inodegc_queue_all/wait_all),
> > so on relaxed architectures (like alpha) I think we have to ensure
> > the bitmask ops carry full ordering against the independent llist
> > ops themselves. i.e. llist_empty() just uses READ_ONCE, so it only
> > orders against other llist ops and won't guarantee any specific
> > ordering against cpumask modifications.
> >
> > I could be remembering incorrectly, but I think that was the
> > original reason for the barriers. Can you please confirm that the
> > cpumask iteration/llist_empty checks do not need these bitmask
> > barriers anymore? If that's ok, then the change looks fine.
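
(For reference, the consumer pattern in question is roughly the sketch
of xfs_inodegc_queue_all() below; this is written from memory and may
not match the current tree exactly. The point is that the cpumask walk
and the llist_empty() check are two independent loads:)

	static bool
	xfs_inodegc_queue_all(
		struct xfs_mount	*mp)
	{
		struct xfs_inodegc	*gc;
		int			cpu;
		bool			ret = false;

		/*
		 * Walk only the CPUs whose bit is set in the per-mount
		 * cpumask. If the llist_add() on the queueing side is not
		 * ordered before the cpumask bit being set, this walk could
		 * observe the bit yet still see an empty list, and so skip
		 * scheduling the worker for that CPU.
		 */
		for_each_cpu(cpu, &mp->m_inodegc_cpumask) {
			gc = per_cpu_ptr(mp->m_inodegc, cpu);
			if (!llist_empty(&gc->list)) {
				mod_delayed_work_on(cpu, mp->m_inodegc_wq,
						&gc->work, 0);
				ret = true;
			}
		}
		return ret;
	}
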
> >
>
> Even on architectures with relaxed memory ordering (like alpha), I
> noticed that llist_add() already has full barrier semantics, so I
> think the smp_mb__before_atomic barrier in xfs_inodegc_queue() can
> be removed.

Ok. Seems reasonable to remove it if everything uses full memory
barriers for the llist_add() operation.

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx