Re: [PATCH 6/6] xfs: rename "alloc_set_aside" to be more descriptive

On Thu, Mar 17, 2022 at 02:21:29PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> We've established in this patchset that the "alloc_set_aside" pool is
> actually used to ensure that a bmbt split always succeeds so that the
> filesystem won't run out of space mid-transaction and crash.  Rename the
> variable and the function to be a little more suggestive of the purpose
> of this quantity.
> 
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  fs/xfs/libxfs/xfs_alloc.c |    4 ++--
>  fs/xfs/libxfs/xfs_alloc.h |    2 +-
>  fs/xfs/xfs_fsops.c        |    2 +-
>  fs/xfs/xfs_log_recover.c  |    2 +-
>  fs/xfs/xfs_mount.c        |    4 ++--
>  fs/xfs/xfs_mount.h        |    7 ++++---
>  6 files changed, 11 insertions(+), 10 deletions(-)
> 
> 
...
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index 9336176dc706..eac9534338fd 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -656,7 +656,7 @@ xfs_mountfs(
>  	 * Compute the amount of space to set aside to handle btree splits now
>  	 * that we have calculated the btree maxlevels.
>  	 */
> -	mp->m_alloc_set_aside = xfs_alloc_set_aside(mp);
> +	mp->m_bmbt_split_setaside = xfs_bmbt_split_setaside(mp);
>  	mp->m_ag_max_usable = xfs_alloc_ag_max_usable(mp);
>  
>  	/*
> @@ -1153,7 +1153,7 @@ xfs_mod_fdblocks(
>  	 * problems (i.e. transaction abort, pagecache discards, etc.) than
>  	 * slightly premature -ENOSPC.
>  	 */
> -	set_aside = mp->m_alloc_set_aside + atomic64_read(&mp->m_allocbt_blks);
> +	set_aside = mp->m_bmbt_split_setaside + atomic64_read(&mp->m_allocbt_blks);

IMO the whole end result would be simpler if the helper(s) were written
to fully isolate usage of ->m_bmbt_split_setaside within those helpers,
rather than leaving callers to open code the set aside calculation
(rough sketch below). That said, this is more commentary on the
previous couple of patches than on this one, and this is an improvement
overall:

Reviewed-by: Brian Foster <bfoster@xxxxxxxxxx>
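
Something along these lines is what I had in mind (the helper name and
placement are made up here purely to illustrate); it just centralizes
the same two quantities the surrounding hunks open code:

static inline uint64_t
xfs_fdblocks_unavailable(
	struct xfs_mount	*mp)
{
	/* set-aside for bmbt splits plus blocks held by the free space btrees */
	return mp->m_bmbt_split_setaside +
	       atomic64_read(&mp->m_allocbt_blks);
}

xfs_mod_fdblocks() and both xfs_fdblocks_available*() variants could
then call that instead of repeating the subtraction.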

>  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
>  	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
>  				     XFS_FDBLOCKS_BATCH) >= 0) {
> diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> index 74e9b8558162..6c4cbd4a0c32 100644
> --- a/fs/xfs/xfs_mount.h
> +++ b/fs/xfs/xfs_mount.h
> @@ -134,7 +134,8 @@ typedef struct xfs_mount {
>  	uint			m_refc_maxlevels; /* max refcount btree level */
>  	unsigned int		m_agbtree_maxlevels; /* max level of all AG btrees */
>  	xfs_extlen_t		m_ag_prealloc_blocks; /* reserved ag blocks */
> -	uint			m_alloc_set_aside; /* space we can't use */
> +	/* space reserved to ensure bmbt splits always succeed */
> +	unsigned int		m_bmbt_split_setaside;
>  	uint			m_ag_max_usable; /* max space per AG */
>  	int			m_dalign;	/* stripe unit */
>  	int			m_swidth;	/* stripe width */
> @@ -503,7 +504,7 @@ xfs_fdblocks_available(
>  {
>  	int64_t			free = percpu_counter_sum(&mp->m_fdblocks);
>  
> -	free -= mp->m_alloc_set_aside;
> +	free -= mp->m_bmbt_split_setaside;
>  	free -= atomic64_read(&mp->m_allocbt_blks);
>  	return free;
>  }
> @@ -516,7 +517,7 @@ xfs_fdblocks_available_fast(
>  	int64_t			free;
>  
>  	free = percpu_counter_read_positive(&mp->m_fdblocks);
> -	free -= mp->m_alloc_set_aside;
> +	free -= mp->m_bmbt_split_setaside;
>  	free -= atomic64_read(&mp->m_allocbt_blks);
>  	return free;
>  }
> 



