Re: [PATCHv9 00/12] PCI: Recode Mobiveil driver and add PCIe Gen4 driver for NXP Layerscape SoCs

On Sat, Feb 29, 2020 at 10:19:07AM -0500, Theodore Y. Ts'o wrote:
> On Sat, Feb 29, 2020 at 11:04:56AM +0000, Russell King - ARM Linux admin wrote:
> > Could it be a race condition, or some problem that's specific to the
> > ARM64 kernel that's provoking this corruption?
> 
> Since I got brought in mid-way through this discussion, can someone
> summarize the vital details of the bughunt?  What kernel version is
> involved, and is this a regression?  If so, what's the last version of
> the kernel where you didn't have a problem on this hardware?

It's a new platform; I've run most 5.x kernels on it, but only recently
have I had an NVMe.  I'm currently running a 5.5-based kernel (for which
I have to patch in support for the platform), and I've no idea whether
this is a regression or not.

> Can you trigger this failure reliably?

No - the very first time I ended up with a corrupted ext4 fs was on the
8th of February, and at that time it was put down to the NVMe not being
power-off safe: the machine had crashed sometime overnight, taking a
section of my network offline (due to a pause frame storm).  So, I
powered it down from the crashed state - and from what people tell me,
an NVMe drive _may_ keep blocks in its volatile cache, unwritten to
safe media, for a considerable time.
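
For what it's worth, whether the drive even claims a volatile write
cache can be read back with the Get Features admin command (FID 06h) -
nvme-cli's "nvme get-feature /dev/nvme0 -f 6" does this.  A rough C
equivalent via the admin passthrough ioctl is below; the opcode and
feature ID come from the NVMe spec, and I've not verified it against
this particular drive, so take it as a sketch only:

/* Rough sketch: query an NVMe controller's Volatile Write Cache
 * feature (FID 06h) via the admin passthrough ioctl.  Opcode and
 * field layout are taken from the NVMe spec; untested here. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/nvme0";
	struct nvme_admin_cmd cmd;
	int fd, status;

	fd = open(dev, O_RDONLY);
	if (fd < 0) {
		perror(dev);
		return 1;
	}

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode = 0x0a;	/* Get Features */
	cmd.cdw10 = 0x06;	/* FID 06h: Volatile Write Cache */

	status = ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd);
	if (status < 0) {
		perror("NVME_IOCTL_ADMIN_CMD");
	} else if (status) {
		/* Non-zero NVMe status; drives with no volatile
		 * cache may reject the FID as an invalid field. */
		fprintf(stderr, "NVMe status 0x%x\n", status);
	} else {
		/* Completion dword 0, bit 0: write cache enable */
		printf("%s: volatile write cache %s\n", dev,
		       (cmd.result & 1) ? "enabled" : "disabled");
	}

	close(fd);
	return status ? 1 : 0;
}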

I never bothered to investigate it because the explanation seemed
reasonable, and manually running e2fsck fixed the filesystem.

The system was then booted back into the NVMe rootfs, and ran without
apparent issue until the 21st of February, when I cleanly shut it down
and powered it off.  During that time, it likely saw many reboots of
the 5.5 kernel.

I powered it back on yesterday morning, and this morning it found the
fs corruption while trying to do a logrotate.

As I said in my last email, I suspect it isn't an ext4 bug, but rather
a locking implementation issue, a coherency issue, or an interconnect
issue.  The 4k block containing the affected inode looks perfectly
reasonable, with the sole exception that the checksum is incorrect for
that one inode - and other inodes stored in the same 4k block were
modified afterwards.  This suggests to me that the writes updating the
two 16-bit words containing the checksum were somehow lost for this
particular inode.
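
To make those "two 16-bit words" concrete: with metadata_csum, ext4
splits the inode's crc32c between l_i_checksum_lo in osd2 and
i_checksum_hi in the extended inode area.  Here's a rough user-space
sketch of the check - the offsets and the seeding order (fs UUID, then
le32 inode number, then le32 generation) follow the on-disk format
documentation rather than being lifted from fs/ext4, so treat them as
assumptions, not gospel:

/*
 * Rough user-space sketch of ext4's metadata_csum inode checksum.
 * Assumptions (from the on-disk format documentation, not verified
 * against fs/ext4): 256-byte inodes with i_checksum_hi present, and
 * the usual seeding order.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli).  Matches the kernel's crc32c(): the
 * caller supplies the seed (~0 initially) and no final inversion is
 * applied. */
static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
{
	while (len--) {
		crc ^= *buf++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
	}
	return crc;
}

/* Offsets of the split checksum within the raw on-disk inode. */
#define CSUM_LO_OFF 0x7c	/* osd2.linux2.l_i_checksum_lo */
#define CSUM_HI_OFF 0x82	/* i_checksum_hi */

static void put_le32(uint8_t *p, uint32_t v)
{
	p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
}

/*
 * Recompute the checksum of one raw inode and compare it with the two
 * 16-bit words stored on disk.  uuid is the superblock s_uuid, ino
 * and gen are the inode number and i_generation; raw is the
 * inode_size-byte on-disk inode (clobbered here for brevity).
 */
static int inode_csum_ok(const uint8_t uuid[16], uint32_t ino,
			 uint32_t gen, uint8_t *raw, size_t inode_size)
{
	uint32_t want, crc;
	uint8_t le[4];

	want = (uint32_t)raw[CSUM_HI_OFF + 1] << 24 |
	       (uint32_t)raw[CSUM_HI_OFF] << 16 |
	       (uint32_t)raw[CSUM_LO_OFF + 1] << 8 |
	       raw[CSUM_LO_OFF];

	/* ext4 hashes the inode with the checksum words zeroed. */
	memset(raw + CSUM_LO_OFF, 0, 2);
	memset(raw + CSUM_HI_OFF, 0, 2);

	crc = crc32c(~0U, uuid, 16);		/* s_csum_seed */
	put_le32(le, ino);
	crc = crc32c(crc, le, 4);		/* + le32 inode number */
	put_le32(le, gen);
	crc = crc32c(crc, le, 4);		/* + le32 i_generation */
	crc = crc32c(crc, raw, inode_size);

	return crc == want;
}

int main(void)
{
	/* Placeholder data only - feed it a real s_uuid and a real
	 * 256-byte inode dump (e.g. from dd) to do anything useful. */
	uint8_t uuid[16] = { 0 };
	uint8_t raw[256] = { 0 };

	printf("checksum %s\n",
	       inode_csum_ok(uuid, 12, 0, raw, sizeof(raw)) ?
	       "matches" : "mismatch");
	return 0;
}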

> Unfortunately, while I'm regularly running xfstests on x86_64 on a
> Google Compute Engine VM, I'm not doing any runs on arm64.  I can
> certainly build an arm64 one.
> 
> There's a test-appliance designed to be run on ARM64 here[1].
> 
> [1] https://kernel.org/pub/linux/kernel/people/tytso/kvm-xfstests/xfstests-amd64.tar.xz

The filename seems to say "amd64", not "arm64"?

> which is a Debian chroot, designed to be run via android-xfstests[2], but
> if you unpack it, it should be possible to enter the chroot and
> trigger the xfstests run manually on any arm64 system.
> 
> [2] https://thunk.org/android-xfstests
> 
> Does anyone know if kernel CI is running xfstests regularly?

I don't know...

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up


