Re: [PATCH, post-03/20 1/1] xfs: hook up inodegc to CPU dead notification


On Wed, Aug 04, 2021 at 09:19:18AM -0700, Darrick J. Wong wrote:
> On Wed, Aug 04, 2021 at 09:52:25PM +1000, Dave Chinner wrote:
> > 
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > So we don't leave queued inodes on a CPU we won't ever flush.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  fs/xfs/xfs_icache.c | 36 ++++++++++++++++++++++++++++++++++++
> >  fs/xfs/xfs_icache.h |  1 +
> >  fs/xfs/xfs_super.c  |  2 +-
> >  3 files changed, 38 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > index f772f2a67a8b..9e2c95903c68 100644
> > --- a/fs/xfs/xfs_icache.c
> > +++ b/fs/xfs/xfs_icache.c
> > @@ -1966,6 +1966,42 @@ xfs_inodegc_start(
> >  	}
> >  }
> >  
> > +/*
> > + * Fold the dead CPU inodegc queue into the current CPUs queue.
> > + */
> > +void
> > +xfs_inodegc_cpu_dead(
> > +	struct xfs_mount	*mp,
> > +	int			dead_cpu)
> 
> unsigned int, since that's the caller's type.

*nod*

> > +{
> > +	struct xfs_inodegc	*dead_gc, *gc;
> > +	struct llist_node	*first, *last;
> > +	int			count = 0;
> > +
> > +	dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu);
> > +	cancel_work_sync(&dead_gc->work);
> > +
> > +	if (llist_empty(&dead_gc->list))
> > +		return;
> > +
> > +	first = dead_gc->list.first;
> > +	last = first;
> > +	while (last->next) {
> > +		last = last->next;
> > +		count++;
> > +	}
> > +	dead_gc->list.first = NULL;
> > +	dead_gc->items = 0;
> > +
> > +	/* Add pending work to current CPU */
> > +	gc = get_cpu_ptr(mp->m_inodegc);
> > +	llist_add_batch(first, last, &gc->list);
> > +	count += READ_ONCE(gc->items);
> > +	WRITE_ONCE(gc->items, count);
> 
> I was wondering about the READ/WRITE_ONCE pattern for gc->items: it's
> meant to be an accurate count of the list items, right?  But there's no
> hard synchronization (e.g. spinlock) around them, which means that the
> only CPU that can access that variable at all is the one that the percpu
> structure belongs to, right?  And I think that's ok here, because the
> only accessors are _queue() and _worker(), which both are supposed to
> run on the same CPU since they're percpu lists, right?

For items that are per-cpu, we only need to guarantee that the
normal case is access by that CPU only and that dependent accesses
within an algorithm occur within a preempt-disabled region. The use
of get_cpu_ptr()/put_cpu_ptr() creates a critical section where
preemption is disabled on that CPU. Hence we can read, modify and
write a per-cpu variable without locking, knowing that nothing else
will be attempting to run the same modification at the same time on
a different CPU, because they will be accessing percpu stuff local
to that CPU, not this one.
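
A minimal sketch of that pattern, with made-up names rather than the
actual inodegc structures:

#include <linux/llist.h>
#include <linux/percpu.h>

struct foo_pcpu {
	struct llist_head	list;	/* lockless per-cpu queue */
	unsigned int		items;	/* rough count of queued entries */
};

static struct foo_pcpu __percpu *foo;	/* from alloc_percpu() */

static void foo_queue_one(struct llist_node *node)
{
	struct foo_pcpu	*p;

	p = get_cpu_ptr(foo);		/* disables preemption */
	llist_add(node, &p->list);	/* only this CPU's copy is touched */
	/* not a plain increment - see below for why */
	WRITE_ONCE(p->items, READ_ONCE(p->items) + 1);
	put_cpu_ptr(foo);		/* re-enables preemption */
}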

The reason for using READ_ONCE/WRITE_ONCE is largely to ensure that
we fetch and store the variable as single, untorn accesses, as the
work that zeroes the count can sometimes run on a different CPU. And
it will shut up the kernel's data race detector (KCSAN)...

As it is, the count of items is rough, and doesn't need to be
accurate. If we race with a zeroing of the count, we'll set the
count to be higher (as if the zeroing didn't occur) and that just
causes the work to be rescheduled sooner than it otherwise would
be. A race with zeroing is not the end of the world...
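
The side that zeroes the count, which may run on a different CPU,
looks roughly like this (same made-up names as the sketch above):

/* worker: drain this CPU's queue and reset the rough counter */
static void foo_worker(struct foo_pcpu *p)
{
	struct llist_node	*first = llist_del_all(&p->list);

	/* a racing increment just leaves items a bit high - harmless */
	WRITE_ONCE(p->items, 0);
	/* ... process the entries on 'first' ... */
}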

> In which case: why can't we just say count = dead_gc->items;?  @dead_cpu
> is being offlined, which implies that nothing will get scheduled on it,
> right?

The local CPU might already have items queued, so the count should
include them, too.

> > +	put_cpu_ptr(gc);
> > +	queue_work(mp->m_inodegc_wq, &gc->work);
> 
> Should this be thresholded like we do for _inodegc_queue?

I thought about that, then thought "this is slow path stuff, we just
want to clear out the backlog so we don't care about batching.."
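
For reference, a thresholded version of this requeue would look
something like the hypothetical check below (FOO_GC_MAX_BACKLOG is
made up here, not something the patch defines):

	/* hypothetical: only kick the worker once the backlog is big enough */
	if (count >= FOO_GC_MAX_BACKLOG)
		queue_work(mp->m_inodegc_wq, &gc->work);

but since CPU offline is a slow path, the patch just queues the work
unconditionally to drain the dead CPU's backlog straight away.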

> In the old days I would have imagined that cpu offlining should be rare
> enough <cough> that it probably doesn't make any real difference.  OTOH
> my cloudic colleague reminds me that they aggressively offline cpus to
> reduce licensing cost(!).

Yeah, CPU hotplug is very rare, except in those rare environments
where it is very common....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
