On Thu, Oct 14, 2021 at 01:18:11PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> Instead of assuming that the hardcoded XFS_BTREE_MAXLEVELS value is big
> enough to handle the maximally tall rmap btree when all blocks are in
> use and maximally shared, let's compute the maximum height assuming the
> rmapbt consumes as many blocks as possible.
> 
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> Reviewed-by: Chandan Babu R <chandan.babu@xxxxxxxxxx>
> ---
>  fs/xfs/libxfs/xfs_btree.c       |   33 +++++++++++++++++++++++++++++
>  fs/xfs/libxfs/xfs_btree.h       |    2 ++
>  fs/xfs/libxfs/xfs_rmap_btree.c  |   45 +++++++++++++++++++++++----------------
>  fs/xfs/libxfs/xfs_trans_resv.c  |   16 ++++++++++++++
>  fs/xfs/libxfs/xfs_trans_space.h |    7 ++++++
>  5 files changed, 85 insertions(+), 18 deletions(-)

Looks good.

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

>  /* Calculate the refcount btree size for some records. */
> diff --git a/fs/xfs/libxfs/xfs_trans_resv.c b/fs/xfs/libxfs/xfs_trans_resv.c
> index c879e7754ee6..6f83d9b306ee 100644
> --- a/fs/xfs/libxfs/xfs_trans_resv.c
> +++ b/fs/xfs/libxfs/xfs_trans_resv.c
> @@ -814,6 +814,19 @@ xfs_trans_resv_calc(
>  	struct xfs_mount	*mp,
>  	struct xfs_trans_resv	*resp)
>  {
> +	unsigned int		rmap_maxlevels = mp->m_rmap_maxlevels;
> +
> +	/*
> +	 * In the early days of rmap+reflink, we always set the rmap maxlevels
> +	 * to 9 even if the AG was small enough that it would never grow to
> +	 * that height.  Transaction reservation sizes influence the minimum
> +	 * log size calculation, which influences the size of the log that mkfs
> +	 * creates.  Use the old value here to ensure that newly formatted
> +	 * small filesystems will mount on older kernels.
> +	 */
> +	if (xfs_has_rmapbt(mp) && xfs_has_reflink(mp))
> +		mp->m_rmap_maxlevels = XFS_OLD_REFLINK_RMAP_MAXLEVELS;
> +

As an aside, what are your plans to get your "legacy minimum log size
reservations" calculation patch moved upstream so we can stop having to
care about this in future?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
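
[Editor's illustration] The commit message above describes computing the
maximum rmap btree height rather than relying on the hardcoded
XFS_BTREE_MAXLEVELS.  The following is a minimal, self-contained sketch
of that kind of worst-case height calculation: assume every node holds
only the minimum number of records, then count how many times the record
population must be divided by that minimum fanout before a single root
block remains.  The function name and the numbers used here are
hypothetical and are not taken from the kernel sources.

	#include <stdio.h>

	/*
	 * Worst-case btree height for "nrecs" records when every block
	 * holds only "minrecs" entries (illustrative only).
	 */
	static unsigned int
	example_btree_maxlevels(unsigned long long nrecs, unsigned int minrecs)
	{
		unsigned int		height = 1;	/* leaf level */
		unsigned long long	blocks;

		/* Leaf blocks needed if each leaf is minimally full. */
		blocks = (nrecs + minrecs - 1) / minrecs;

		/* Add interior levels until one root block covers them all. */
		while (blocks > 1) {
			blocks = (blocks + minrecs - 1) / minrecs;
			height++;
		}

		return height;
	}

	int main(void)
	{
		/*
		 * Hypothetical inputs: ~268 million rmap records in an AG
		 * and a minimum fanout of 84 records per block.
		 */
		printf("worst-case height: %u\n",
		       example_btree_maxlevels(268435456ULL, 84));
		return 0;
	}

The point of sizing against this worst case is that the computed height
(and hence the per-AG transaction reservations derived from it) depends
on the actual AG geometry instead of a fixed upper bound, which is also
why the quoted hunk above pins the value back to the old maximum when
calculating reservations for minimum log size compatibility.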