On Thu, Feb 02, 2023 at 10:09:40PM +0100, Anthony Iliopoulos wrote:
> This test is failing on filesystems with 64k blocksize since the leaf
> hdr.firstused field is 16 bit and as such trying to reset it to $dbsize
> overflows and is rejected by xfs_db. The leaf is never properly reset,
> and the discrepancy is picked up by xfs_repair, thus failing the test.
>
> Fix it by setting it to XFS_ATTR3_LEAF_NULLOFF (0) as this is the proper
> on-disk value to indicate an empty leaf on 64k blocksized fses.
>
> Signed-off-by: Anthony Iliopoulos <ailiop@xxxxxxxx>
> ---
>  tests/xfs/191 | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/tests/xfs/191 b/tests/xfs/191
> index 4f0a9b9eeef5..8dd875fcd28b 100755
> --- a/tests/xfs/191
> +++ b/tests/xfs/191
> @@ -78,10 +78,20 @@ make_empty_leaf() {
>
> 	base=$(_scratch_xfs_get_metadata_field "hdr.freemap[0].base" "inode $inum" "ablock 0")
>
> +	# 64k dbsize is a special case since it overflows the 16 bit firstused
> +	# field and it needs to be set to the special XFS_ATTR3_LEAF_NULLOFF (0)
> +	# value to indicate a null leaf. For more details see kernel commit:
> +	# e87021a2bc10 ("xfs: use larger in-core attr firstused field and detect overflow").

Do we need a second _fixed_by_kernel_commit here?

> +	if [ $dbsize -eq 65536 ]; then
> +		firstused=0;
> +	else
> +		firstused=$dbsize;

Trailing semicolon not necessary here.

> +	fi

Otherwise, looks correct to me, so
Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>

--D

> +
> 	_scratch_xfs_db -x -c "inode $inum" -c "ablock 0" \
> 		-c "write -d hdr.count 0" \
> 		-c "write -d hdr.usedbytes 0" \
> -		-c "write -d hdr.firstused $dbsize" \
> +		-c "write -d hdr.firstused $firstused" \
> 		-c "write -d hdr.freemap[0].size $((dbsize - base))" \
> 		-c print >> $seqres.full
> }
> --
> 2.35.3
>