On Sat, May 09, 2020 at 09:29:58AM -0700, Darrick J. Wong wrote:
> @@ -140,11 +140,24 @@ _("can't read %s block %u for inode %" PRIu64 "\n"),
> 	if (whichfork == XFS_DATA_FORK &&
> 	    (nodehdr.magic == XFS_DIR2_LEAFN_MAGIC ||
> 	     nodehdr.magic == XFS_DIR3_LEAFN_MAGIC)) {
> +		int bad = 0;
> +
> 		if (i != -1) {
> 			do_warn(
> _("found non-root LEAFN node in inode %" PRIu64 " bno = %u\n"),
> 				da_cursor->ino, bno);
> +			bad++;
> 		}
> +
> +		/* corrupt leafn node; rebuild the dir. */
> +		if (!bad &&
> +		    (bp->b_error == -EFSBADCRC ||
> +		     bp->b_error == -EFSCORRUPTED)) {
> +			do_warn(
> +_("corrupt %s LEAFN block %u for inode %" PRIu64 "\n"),
> +				FORKNAME(whichfork), bno, da_cursor->ino);
> +		}
> +

So this doesn't really change any return value, it just adds an error
message.  But looking at this code I wonder why we don't check b_error
first thing after reading the buffer, as checking the magic for a
corrupt buffer seems a little pointless.
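
To make that concrete, here is a rough sketch of what I mean (the exact
placement right after the read and the goto error_out exit are my
assumptions about the surrounding code, not what is there today):

	/*
	 * Sketch only: reject a corrupt or badly-checksummed buffer
	 * before trusting nodehdr.magic, since the magic read out of a
	 * corrupt buffer is meaningless anyway.
	 */
	if (bp->b_error == -EFSBADCRC || bp->b_error == -EFSCORRUPTED) {
		do_warn(
_("corrupt %s block %u for inode %" PRIu64 "\n"),
			FORKNAME(whichfork), bno, da_cursor->ino);
		goto error_out;
	}

i.e. handle b_error up front and only then look at the magic for the
non-root LEAFN special case.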