Re: Corruption of in-memory data (0x8) detected at xfs_defer_finish_noroll on kernel 6.3

On Tue, May 23, 2023 at 04:32:11PM -0500, Justin Forbes wrote:
> On Wed, May 03, 2023 at 09:13:18AM +1000, Dave Chinner wrote:
> > On Tue, May 02, 2023 at 05:13:09PM -0500, Mike Pastore wrote:
> > > On Tue, May 2, 2023, 5:03 PM Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > 
> > > >
> > > > If you can find a minimal reproducer, that would help a lot in
> > > > diagnosing the issue.
> > > >
> > > 
> > > This is great, thank you. I'll get to work.
> > > 
> > > One note: the problem occurred with and without crc=0, so we can rule that
> > > out at least.
> > 
> > Yes, I noticed that. My point was more that we have much more
> > confidence in crc=1 filesystems because they have much more robust
> > verification of the on-disk format and won't fail log recovery in
> > the way you noticed. The verifiers on crc=1 filesystems also
> > catch problems caused by in-memory corruption far more often,
> > frequently stopping those events from corrupting the on-disk
> > filesystem.
> > 
> > Hence if you are seeing corruption events, you really want to be
> > using "-m crc=1" (default config) filesystems...
> 
> Upon trying to roll out 6.3.3 to Fedora users, it seems that we have a
> few hitting this reliably with 6.3 kernels.  It is certainly not all
> users of XFS though, as I use it extensively and haven't run across it.

Has anyone who is hitting this bisected the failure to a commit
between 6.2 and 6.3?  Has anyone who is hitting it tried a 6.4-rc3
kernel to see if the problem is already fixed?

> The most responsive users who can reproduce it all seem to be running
> on XFS filesystems that were created a few years ago, and some can't
> even reproduce it on their newer systems.  Either way, it is a widespread
> enough problem that I can't roll out 6.3 kernels to stable releases
> until it is fixed.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=2208553

I only see one person reporting the issue in that bug, but you
implied that it is a widespread and easily reproducible issue. Where
can I find all the other bug reports so I can look through them for
hints as to what might be causing this?

Right now I only have two individual reports of the issue - the OP
and the user that reported the above bug.  In both cases a shutdown
occurred because metadata corruption was detected while reading
metadata, followed by a shutdown during recovery caused by reading an
inode buffer that doesn't actually contain inodes.

Both reports are from filesystems on LVM, and both likely have stripe
units defined. The Fedora case is on RAID5+LVM; I have no idea what
the OP was using.  Neither report gives us a workload description
that we can use to attempt to reproduce this.

Given that it's not widespread (i.e. only a small number of users
are seeing this issue) and we have very few details to go on, we
can't even be certain that the corruption is a result of an XFS
issue - it may be a problem in the layers below XFS (lvm, md raid,
drivers, etc) and XFS is simply the first thing to trip over it...

We really need more information to make any progress here. Can you
ask everyone who has reported the issue to you to supply us with
their full hardware config (CPU, memory, storage devices, hardware
RAID cache settings, storage configuration, lvm/crypt/md setup,
filesystem configuration (xfs_info), mount options, etc.) as well as
what they are doing on their machines and what workloads are running
in the background when the problem manifests?

We need to work out how to reproduce this issue so we can triage it,
but right now we have nothing we can actually work with....

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


