On Sat, May 09, 2020 at 10:08:50AM -0700, Christoph Hellwig wrote:
> On Sat, May 09, 2020 at 09:29:58AM -0700, Darrick J. Wong wrote:
> > @@ -140,11 +140,24 @@ _("can't read %s block %u for inode %" PRIu64 "\n"),
> >  	if (whichfork == XFS_DATA_FORK &&
> >  	    (nodehdr.magic == XFS_DIR2_LEAFN_MAGIC ||
> >  	     nodehdr.magic == XFS_DIR3_LEAFN_MAGIC)) {
> > +		int		bad = 0;
> > +
> >  		if (i != -1) {
> >  			do_warn(
> >  _("found non-root LEAFN node in inode %" PRIu64 " bno = %u\n"),
> >  				da_cursor->ino, bno);
> > +			bad++;
> >  		}
> > +
> > +		/* corrupt leafn node; rebuild the dir. */
> > +		if (!bad &&
> > +		    (bp->b_error == -EFSBADCRC ||
> > +		     bp->b_error == -EFSCORRUPTED)) {
> > +			do_warn(
> > +_("corrupt %s LEAFN block %u for inode %" PRIu64 "\n"),
> > +				FORKNAME(whichfork), bno, da_cursor->ino);
> > +		}
> > +
>
> So this doesn't really change any return value, but just the error
> message.  But looking at this code I wonder why we don't check
> b_error first thing after reading the buffer, as checking the magic
> for a corrupt buffer seems a little pointless.

<shrug> In the first hunk I was merely following what we did for
DA_NODE blocks (check magic, then check for corruption errors), but I
guess I could just pull that check up in the file.  I'll have a look
and see what happens if I do that.

--D
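
A minimal sketch of the reordering under discussion: check bp->b_error
immediately after the buffer read, before any magic-number
classification.  This is illustrative only; the error_out label and the
surrounding control flow are assumptions for the example, not the
actual xfsprogs code.

	/*
	 * Sketch only: fail fast on a buffer whose verifier already
	 * flagged CRC or structural corruption.  If the block is
	 * corrupt, inspecting its magic number tells us nothing.
	 */
	if (bp->b_error == -EFSBADCRC || bp->b_error == -EFSCORRUPTED) {
		do_warn(
_("corrupt %s block %u for inode %" PRIu64 "\n"),
			FORKNAME(whichfork), bno, da_cursor->ino);
		goto error_out;		/* hypothetical error path */
	}

	/* only a clean buffer is worth classifying by magic */
	if (whichfork == XFS_DATA_FORK &&
	    (nodehdr.magic == XFS_DIR2_LEAFN_MAGIC ||
	     nodehdr.magic == XFS_DIR3_LEAFN_MAGIC)) {
		...
	}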