On Tue, Jun 09, 2020 at 07:40:53PM +0800, Gao Xiang wrote:
> In production, we found that sometimes xfs_repair phase 5
> rebuilds freespace node blocks with fewer pointers than minrecs,
> and if we run xfs_repair again it reports the following message:
>
> bad btree nrecs (39, min=40, max=80) in btbno block 0/7882
>
> The background is that xfs_repair starts to rebuild the AGFL
> after the freespace btree is settled in phase 5, so we need to
> leave enough room in each btree leaf in advance to avoid a
> freespace btree split, which would make the AGFL rebuild fail.
> The old mathematics uses ceil(num_extents / maxrecs) to decide
> the number of node blocks. That would be fine without leaving
> extra space, since minrecs = maxrecs / 2, but if some slack is
> subtracted from maxrecs, the result can be larger than expected
> and leave num_recs_pb less than minrecs, i.e.:
>
> num_extents = 79, adj_maxrecs = 80 - 2 (slack) = 78
>
> so we'd get
>
> num_blocks = ceil(79 / 78) = 2,
> num_recs_pb = 79 / 2 = 39, which is less than
> minrecs = 80 / 2 = 40
>
> OTOH, the btree bulk loading code behaves in a different way:
> xfs_btree_bload_level_geometry computes
>
> num_blocks = floor(num_extents / maxrecs)
>
> which will never go below minrecs. And when the per-block count
> goes above maxrecs, it just increments num_blocks and
> recalculates, so we get reasonable results.
>
> In the long term, the btree bulk loader will replace the current
> repair code and also resolve the AGFL dependency issue, but we
> may still want a backportable solution for stable versions.
> Hence, use the same logic to avoid the freespace btree minrecs
> underflow for now.
>
> Cc: "Darrick J. Wong" <darrick.wong@xxxxxxxxxx>
> Cc: Dave Chinner <dchinner@xxxxxxxxxx>
> Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>
> Fixes: 9851fd79bfb1 ("repair: AGFL rebuild fails if btree split required")
> Signed-off-by: Gao Xiang <hsiangkao@xxxxxxxxxx>
> ---
> not heavily tested yet..
>
> repair/phase5.c | 101 +++++++++++++++++++++---------------------
> 1 file changed, 45 insertions(+), 56 deletions(-)
>
> diff --git a/repair/phase5.c b/repair/phase5.c
> index abae8a08..997804a5 100644
> --- a/repair/phase5.c
> +++ b/repair/phase5.c
> @@ -348,11 +348,29 @@ finish_cursor(bt_status_t *curs)
>   * failure at runtime. Hence leave a couple of records slack space in
>   * each block to allow immediate modification of the tree without
>   * requiring splits to be done.
> - *
> - * XXX(hch): any reason we don't just look at mp->m_alloc_mxr?
>   */
> -#define XR_ALLOC_BLOCK_MAXRECS(mp, level) \
> -	(libxfs_allocbt_maxrecs((mp), (mp)->m_sb.sb_blocksize, (level) == 0) - 2)
> +static void
> +compute_level_geometry(xfs_mount_t *mp, bt_stat_level_t *lptr,
> +		uint64_t nr_this_level, bool leaf)
> +{
> +	unsigned int maxrecs = mp->m_alloc_mxr[!leaf];
> +	int slack = leaf ? 2 : 0;
> +	unsigned int desired_npb;
> +
> +	desired_npb = max(mp->m_alloc_mnr[!leaf], maxrecs - slack);
> +	lptr->num_recs_tot = nr_this_level;
> +	lptr->num_blocks = max(1ULL, nr_this_level / desired_npb);
> +
> +	lptr->num_recs_pb = nr_this_level / lptr->num_blocks;
> +	lptr->modulo = nr_this_level % lptr->num_blocks;
> +	if (lptr->num_recs_pb > maxrecs || (lptr->num_recs_pb == maxrecs &&
> +			lptr->modulo)) {
> +		lptr->num_blocks++;
> +
> +		lptr->num_recs_pb = nr_this_level / lptr->num_blocks;
> +		lptr->modulo = nr_this_level % lptr->num_blocks;
> +	}
> +}

side note: alternatively, maybe we could also decrease num_blocks and
recalculate in the original approach, although either way we could not
leave the 2 extra records of slack for the above 79-of-80 case...