Re: [xfstests generic/648] 64k directory block size (-n size=65536) crash on _xfs_buf_ioapply

[ FYI, I missed this because the fstests list copy arrived before the
linux-xfs list copy, so it got filtered into fstests rather than XFS.
Please send test failures like this to the linux-xfs list only -
there is no value in also sending them to fstests, and doing so can
cause bug reports to "go missing". ]

On Wed, Jan 03, 2024 at 08:35:52PM -0800, Darrick J. Wong wrote:
> On Mon, Dec 25, 2023 at 09:38:54PM +0800, Zorro Lang wrote:
> > On Tue, Dec 19, 2023 at 02:34:20PM +0800, Zorro Lang wrote:
> > > > Also, does "xfs: update dir3 leaf block metadata after swap" fix it?
> > > 
> > > OK, I'll merge and give it a try.
> > 
> > It's still reproducible on the xfs-linux for-next branch xfs-6.8-merge-2, which
> > contains commit 5759aa4f9560 ("xfs: update dir3 leaf block metadata after swap")
> 
> DOH.  Got a metadump?  I wonder if s390x is more fubar than we used to
> think it was...

I'm betting that the directory corruption is being reported because
the directory block change was not replayed by recovery due to the
bad magic number error. The kernel was configured with
CONFIG_XFS_ASSERT_FATAL=n, so it continued on after the recovery
failure and things went bad when that unrecovered metadata was
accessed.
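
To illustrate the "non-fatal assert" behaviour referred to above - a
minimal sketch only, with made-up names, not the actual kernel code:

	#include <linux/types.h>
	#include <linux/printk.h>
	#include <linux/bug.h>

	/*
	 * Sketch: when the fatal-assert config is off, an assert failure
	 * only warns, so log recovery carries on with the buffer left
	 * unreplayed and later accesses trip over the stale metadata.
	 */
	static void report_assert_failure(const char *expr, const char *file,
					  int line, bool fatal)
	{
		pr_warn("Assertion failed: %s, file: %s, line: %d\n",
			expr, file, line);
		if (fatal)
			BUG();	/* CONFIG_XFS_ASSERT_FATAL=y behaviour */
		/* =n: warn and continue */
	}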

Two things need addressing here: first, a magic number mismatch
between the buffer and the log item should cause recovery to fail on
production kernels; second, we need to work out how the buffer being
recovered ended up with a magic number that doesn't match the buf
log item.
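
As a rough illustration of the first point - a sketch only, where the
function and parameter names are placeholders rather than the real
xfs_buf_item_recover.c code:

	#include <linux/types.h>
	#include <linux/printk.h>
	#include <linux/errno.h>

	#ifndef EFSCORRUPTED
	#define EFSCORRUPTED	EUCLEAN		/* as in fs/xfs/xfs_linux.h */
	#endif

	/*
	 * Sketch: compare the magic found in the recovered buffer against
	 * the block type the buf log item says it should contain, and fail
	 * recovery with a hard error instead of a debug-only ASSERT so that
	 * production kernels stop rather than replaying garbage.
	 */
	static int check_recovered_buf_magic(u32 disk_magic, u32 expected_magic)
	{
		if (disk_magic == expected_magic)
			return 0;

		pr_err("XFS: recovered buffer magic 0x%x, log item expects 0x%x\n",
		       disk_magic, expected_magic);
		return -EFSCORRUPTED;	/* abort recovery */
	}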

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx