Re: [PATCH, post-03/20 1/1] xfs: hook up inodegc to CPU dead notification

On Wed, Aug 04, 2021 at 09:52:25PM +1000, Dave Chinner wrote:
> 
> From: Dave Chinner <dchinner@xxxxxxxxxx>
> 
> So we don't leave queued inodes on a CPU we won't ever flush.
> 
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_icache.c | 36 ++++++++++++++++++++++++++++++++++++
>  fs/xfs/xfs_icache.h |  1 +
>  fs/xfs/xfs_super.c  |  2 +-
>  3 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index f772f2a67a8b..9e2c95903c68 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1966,6 +1966,42 @@ xfs_inodegc_start(
>  	}
>  }
>  
> +/*
> + * Fold the dead CPU inodegc queue into the current CPU's queue.
> + */
> +void
> +xfs_inodegc_cpu_dead(
> +	struct xfs_mount	*mp,
> +	int			dead_cpu)

unsigned int, since that's the caller's type.
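
i.e. something like this (untested, just to illustrate what I mean):

	void
	xfs_inodegc_cpu_dead(
		struct xfs_mount	*mp,
		unsigned int		dead_cpu)

...with the prototype in xfs_icache.h changed to match:

	void xfs_inodegc_cpu_dead(struct xfs_mount *mp, unsigned int cpu);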

> +{
> +	struct xfs_inodegc	*dead_gc, *gc;
> +	struct llist_node	*first, *last;
> +	int			count = 0;
> +
> +	dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu);
> +	cancel_work_sync(&dead_gc->work);
> +
> +	if (llist_empty(&dead_gc->list))
> +		return;
> +
> +	first = dead_gc->list.first;
> +	last = first;
> +	while (last->next) {
> +		last = last->next;
> +		count++;
> +	}
> +	dead_gc->list.first = NULL;
> +	dead_gc->items = 0;
> +
> +	/* Add pending work to current CPU */
> +	gc = get_cpu_ptr(mp->m_inodegc);
> +	llist_add_batch(first, last, &gc->list);
> +	count += READ_ONCE(gc->items);
> +	WRITE_ONCE(gc->items, count);

I was wondering about the READ/WRITE_ONCE pattern for gc->items: it's
meant to be an accurate count of the list items, right?  But there's no
hard synchronization (e.g. spinlock) around them, which means that the
only CPU that can access that variable at all is the one that the percpu
structure belongs to, right?  And I think that's ok here, because the
only accessors are _queue() and _worker(), which both are supposed to
run on the same CPU since they're percpu lists, right?

In which case: why can't we just say count = dead_gc->items;?  @dead_cpu
is being offlined, which implies that nothing will get scheduled on it,
right?
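
IOWs, something like this (completely untested, just sketching what I
mean):

	/* @dead_cpu is offline, so nothing else can touch dead_gc now */
	first = dead_gc->list.first;
	last = first;
	while (last->next)
		last = last->next;
	count = dead_gc->items;
	dead_gc->list.first = NULL;
	dead_gc->items = 0;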

> +	put_cpu_ptr(gc);
> +	queue_work(mp->m_inodegc_wq, &gc->work);

Should this be thresholded like we do for _inodegc_queue?

In the old days I would have imagined that cpu offlining would be rare
enough <cough> that it probably doesn't make any real difference.  OTOH
my colleagues on the cloud side remind me that cpus get offlined
aggressively to reduce licensing costs(!).
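
If we did decide to threshold it, I'd imagine something along these
lines (XFS_INODEGC_DEAD_THRESHOLD is a made-up name here, standing in
for whatever limit _inodegc_queue uses; untested):

	/* hypothetical threshold: only kick the worker for a large enough batch */
	if (count >= XFS_INODEGC_DEAD_THRESHOLD)
		queue_work(mp->m_inodegc_wq, &gc->work);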

--D

> +}
> +
>  #ifdef CONFIG_XFS_RT
>  static inline bool
>  xfs_inodegc_want_queue_rt_file(
> diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h
> index bdf2a8d3fdd5..853d5bfc0cfb 100644
> --- a/fs/xfs/xfs_icache.h
> +++ b/fs/xfs/xfs_icache.h
> @@ -79,5 +79,6 @@ void xfs_inodegc_worker(struct work_struct *work);
>  void xfs_inodegc_flush(struct xfs_mount *mp);
>  void xfs_inodegc_stop(struct xfs_mount *mp);
>  void xfs_inodegc_start(struct xfs_mount *mp);
> +void xfs_inodegc_cpu_dead(struct xfs_mount *mp, int cpu);
>  
>  #endif
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index c251679e8514..f579ec49eb7a 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
> @@ -2187,7 +2187,7 @@ xfs_cpu_dead(
>  	spin_lock(&xfs_mount_list_lock);
>  	list_for_each_entry_safe(mp, n, &xfs_mount_list, m_mount_list) {
>  		spin_unlock(&xfs_mount_list_lock);
> -		/* xfs_subsys_dead(mp, cpu); */
> +		xfs_inodegc_cpu_dead(mp, cpu);
>  		spin_lock(&xfs_mount_list_lock);
>  	}
>  	spin_unlock(&xfs_mount_list_lock);


