On Tue, Sep 21, 2021 at 03:48:13PM -0700, Rustam Kovhaev wrote:
> Hi Fengfei, Eric,
> 
> On Thu, Dec 24, 2020 at 01:35:32PM -0600, Eric Sandeen wrote:
> > On 12/24/20 3:51 AM, Fengfei Xi wrote:
> > > We have encountered the following problems several times:
> > > 1. A raid slot or hardware problem causes block device loss.
> > > 2. IO requests continue to be issued to the problematic block device.
> > > 3. The system possibly crashes after a few hours.
> > 
> > What kernel is this on?
> 
> I have a customer that recently hit this issue on a 4.12.14-122.74
> SLE12-SP5 kernel.

I think you need to engage SuSE support and engineering, then, as this
is not a kernel supported by upstream devs. I'd be saying the same
thing if this was an RHEL frankenkernel, too.

> Here is my backtrace:
> [965887.179651] XFS (veeamimage0): Mounting V5 Filesystem
> [965887.848169] XFS (veeamimage0): Starting recovery (logdev: internal)
> [965888.268088] XFS (veeamimage0): Ending recovery (logdev: internal)
> [965888.289466] XFS (veeamimage1): Mounting V5 Filesystem
> [965888.406585] XFS (veeamimage1): Starting recovery (logdev: internal)
> [965888.473768] XFS (veeamimage1): Ending recovery (logdev: internal)
> [986032.367648] XFS (veeamimage0): metadata I/O error: block 0x1044a20 ("xfs_buf_iodone_callback_error") error 5 numblks 32

Storage layers returned -EIO a second before things went bad. Whether
that is relevant cannot be determined from the information provided.

> [986033.152809] BUG: unable to handle kernel NULL pointer dereference at (null)
> [986033.152973] IP: xfs_buf_offset+0x2c/0x60 [xfs]
> [986033.153013] PGD 0 P4D 0
> [986033.153041] Oops: 0000 [#1] SMP PTI
> [986033.153083] CPU: 13 PID: 48029 Comm: xfsaild/veeamim Tainted: P OE 4.12.14-122.74-default #1 SLE12-SP5

And there are unknown proprietary modules loaded, so we can't trust
the code in the kernel to be operating correctly...

I'm not sure there's really anything upstream developers can help
with without any idea of how to reproduce this problem on a current
kernel...
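
FWIW, the helper that faulted looks roughly like this in 4.12-era
kernels (a sketch of fs/xfs/xfs_buf.c from memory, not verbatim SLES
source, so treat the details as approximate):

	void *
	xfs_buf_offset(
		struct xfs_buf	*bp,
		size_t		offset)
	{
		struct page	*page;

		/* Buffer mapped into one contiguous virtual region? */
		if (bp->b_addr)
			return bp->b_addr + offset;

		/* Otherwise index into the backing page array. */
		offset += bp->b_offset;
		page = bp->b_pages[offset >> PAGE_SHIFT];
		return page_address(page) + (offset & (PAGE_SIZE - 1));
	}

A fault at address (null) there would be consistent with both
bp->b_addr and bp->b_pages being NULL - i.e. the AIL pushing a buffer
whose backing memory has already been torn down - but that is
speculation without a reproducer on a current kernel.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx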