On Thu, Oct 14, 2021 at 01:18:00PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> Compute the actual maximum AG btree height for deciding if a per-AG
> block reservation is critically low. This only affects the sanity check
> condition, since we /generally/ will trigger on the 10% threshold. This
> is a long-winded way of saying that we're removing one more usage of
> XFS_BTREE_MAXLEVELS.
> 
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  fs/xfs/libxfs/xfs_ag_resv.c |    3 ++-
>  fs/xfs/xfs_mount.c          |   14 ++++++++++++++
>  fs/xfs/xfs_mount.h          |    1 +
>  3 files changed, 17 insertions(+), 1 deletion(-)

One minor nit below, otherwise it looks good.

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

> 
> 
> diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c
> index 2aa2b3484c28..fe94058d4e9e 100644
> --- a/fs/xfs/libxfs/xfs_ag_resv.c
> +++ b/fs/xfs/libxfs/xfs_ag_resv.c
> @@ -91,7 +91,8 @@ xfs_ag_resv_critical(
>  	trace_xfs_ag_resv_critical(pag, type, avail);
>  
>  	/* Critically low if less than 10% or max btree height remains. */
> -	return XFS_TEST_ERROR(avail < orig / 10 || avail < XFS_BTREE_MAXLEVELS,
> +	return XFS_TEST_ERROR(avail < orig / 10 ||
> +			avail < pag->pag_mount->m_agbtree_maxlevels,
>  			pag->pag_mount, XFS_ERRTAG_AG_RESV_CRITICAL);
>  }
>  
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index 06dac09eddbd..5be1dd63fac5 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -567,6 +567,18 @@ xfs_mount_setup_inode_geom(
>  	xfs_ialloc_setup_geometry(mp);
>  }
>  
> +/* Compute maximum possible height for per-AG btree types for this fs. */
> +static inline void
> +xfs_agbtree_compute_maxlevels(
> +	struct xfs_mount	*mp)
> +{
> +	unsigned int		ret;
> +
> +	ret = max(mp->m_alloc_maxlevels, M_IGEO(mp)->inobt_maxlevels);
> +	ret = max(ret, mp->m_rmap_maxlevels);
> +	mp->m_agbtree_maxlevels = max(ret, mp->m_refc_maxlevels);
> +}

"ret" should really be named "levels" here because it's not a return
value anymore...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
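
For illustration, a minimal sketch of how the helper might read with the
suggested rename applied (an assumed respin, not code from the posted
patch; only the local variable name changes):

	/* Compute maximum possible height for per-AG btree types for this fs. */
	static inline void
	xfs_agbtree_compute_maxlevels(
		struct xfs_mount	*mp)
	{
		/* "levels" rather than "ret": the value is stored in the mount, not returned. */
		unsigned int		levels;

		levels = max(mp->m_alloc_maxlevels, M_IGEO(mp)->inobt_maxlevels);
		levels = max(levels, mp->m_rmap_maxlevels);
		mp->m_agbtree_maxlevels = max(levels, mp->m_refc_maxlevels);
	}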