Re: XFS corrupt after RAID failure and resync

On Tue, Jan 06, 2015 at 05:12:14PM +1100, David Raffelt wrote:
> Hi again,
> Some more information... the kernel log shows the following errors were
> occurring after the RAID recovery, but before I reset the server.
> 

By "after the RAID recovery", do you mean after the two drives had failed
out, the hot spare was activated, and the resync completed? It certainly
seems like something went wrong in that process. The output below looks
like it's failing to read in some inodes (error 117 is EFSCORRUPTED, so
the inode buffers are failing the on-read verifier checks). Is there any
stack trace output that accompanies these error messages to confirm?

I would start by verifying that the array configuration still looks sane
(see the sketch below), but after the hot spare resync and then one or two
other drive replacements (was the hot spare ultimately replaced?), it's
hard to say whether it might be recoverable.

Brian

> Jan 06 00:00:27 server kernel: XFS (md0): Corruption detected. Unmount and
> run xfs_repair
> Jan 06 00:00:27 server kernel: XFS (md0): Corruption detected. Unmount and
> run xfs_repair
> Jan 06 00:00:27 server kernel: XFS (md0): Corruption detected. Unmount and
> run xfs_repair
> Jan 06 00:00:27 server kernel: XFS (md0): metadata I/O error: block
> 0x36b106c00 ("xfs_trans_read_buf_map") error 117 numblks 16
> Jan 06 00:00:27 server kernel: XFS (md0): xfs_imap_to_bp:
> xfs_trans_read_buf() returned error 117.
> 
> 
> Thanks,
> Dave

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


