Re: mount before xfs_repair hangs

On Wed, Mar 11, 2020 at 04:11:27PM -0700, Bart Brashers wrote:
> After working fine for 2 days, it happened again. Drives went offline
> for no apparent reason, and a logical device (as arcconf calls them)
> failed. arcconf listed the hard drives as all online by the time I had
> logged on.
> 
> The server connected to the JBOD had rebooted by the time I noticed the problem.
> 
> There are two xfs filesystems on this server. I can mount one of them,
> and ran xfs_repair on it.
> 
> I first tried mounting the other read-only,no-recovery. That worked.
> Trying to mount normally hangs. I see in ps aux | grep mount that it's
> not using CPU. Here's the mount command I gave:
> 
> mount -t xfs -o inode64,logdev=/dev/md/nvme2 /dev/volgrp4TB/lvol4TB
> /export/lvol4TB/
> 
> I did an echo w > /proc/sysrq-trigger while I was watching the
> console; it said "SysRq : Show Blocked State". Here's what the output
> of dmesg looks like, starting with that line. It then gives blocks
> about what's happening on each CPU, some of which mention "xfs".
> 
> [  228.927915] SysRq : Show Blocked State
> [  228.928525]   task                        PC stack   pid father
> [  228.928605] mount           D ffff96f79a553150     0 11341  11254 0x00000080
> [  228.928609] Call Trace:
> [  228.928617]  [<ffffffffb0b7f1c9>] schedule+0x29/0x70
> [  228.928624]  [<ffffffffb0b7cb51>] schedule_timeout+0x221/0x2d0
> [  228.928626]  [<ffffffffb0b7f57d>] wait_for_completion+0xfd/0x140
> [  228.928633]  [<ffffffffb04da0b0>] ? wake_up_state+0x20/0x20
> [  228.928667]  [<ffffffffc04c599e>] ? xfs_buf_delwri_submit+0x5e/0xf0 [xfs]
> [  228.928682]  [<ffffffffc04c3217>] xfs_buf_iowait+0x27/0xb0 [xfs]
> [  228.928696]  [<ffffffffc04c599e>] xfs_buf_delwri_submit+0x5e/0xf0 [xfs]
> [  228.928712]  [<ffffffffc04f2a9e>] xlog_do_recovery_pass+0x3ae/0x6e0 [xfs]
> [  228.928727]  [<ffffffffc04f2e59>] xlog_do_log_recovery+0x89/0xd0 [xfs]
> [  228.928742]  [<ffffffffc04f2ed1>] xlog_do_recover+0x31/0x180 [xfs]
> [  228.928758]  [<ffffffffc04f3fef>] xlog_recover+0xbf/0x190 [xfs]
> [  228.928772]  [<ffffffffc04e658f>] xfs_log_mount+0xff/0x310 [xfs]
> [  228.928801]  [<ffffffffc04dd1b0>] xfs_mountfs+0x520/0x8e0 [xfs]
> [  228.928814]  [<ffffffffc04e02a0>] xfs_fs_fill_super+0x410/0x550 [xfs]
> [  228.928818]  [<ffffffffb064c893>] mount_bdev+0x1b3/0x1f0
> [  228.928831]  [<ffffffffc04dfe90>] ? xfs_test_remount_options.isra.12+0x70/0x70 [xfs]
> [  228.928842]  [<ffffffffc04deaa5>] xfs_fs_mount+0x15/0x20 [xfs]
> [  228.928845]  [<ffffffffb064d1fe>] mount_fs+0x3e/0x1b0
> [  228.928850]  [<ffffffffb066b377>] vfs_kern_mount+0x67/0x110
> [  228.928852]  [<ffffffffb066dacf>] do_mount+0x1ef/0xce0
> [  228.928855]  [<ffffffffb064521a>] ? __check_object_size+0x1ca/0x250
> [  228.928858]  [<ffffffffb062368c>] ? kmem_cache_alloc_trace+0x3c/0x200
> [  228.928860]  [<ffffffffb066e903>] SyS_mount+0x83/0xd0
> [  228.928863]  [<ffffffffb0b8bede>] system_call_fastpath+0x25/0x2a

It's waiting for the metadata writes for the recovered changes to
complete. This implies the underlying device is either hung or
extremely slow. I'd suggest "extremely slow", because it's doing its
own internal rebuild and may well be blocking new writes until it
has recovered the regions the new writes are being directed at...
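
(A rough way to tell which it is, while the mount is stuck, is to
watch the block device underneath the LV -- sdX below is a placeholder
for whatever member device the volume group sits on:

    iostat -x 5                  # await/%util pinned high => device crawling
    cat /sys/block/sdX/stat      # 9th field is I/Os currently in flight

If requests drain, just very slowly, that points at the rebuild; if
the in-flight count never goes down, the controller has stopped
completing I/O entirely.)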

This all looks like HW RAID controller problems, nothing to do with
Linux or the filesystem.
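
(The controller's own view should confirm that -- a quick check,
assuming it enumerates as controller 1 in arcconf:

    arcconf GETCONFIG 1 LD       # logical device state (Optimal/Degraded/Rebuilding/...)
    arcconf GETCONFIG 1 PD       # per-drive state and error counters

A logical device stuck in a background rebuild or verify would line up
with writes taking forever to complete.)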

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


