Re: easily reproducible filesystem crash on rebuilding array

On Thu, Dec 18, 2014 at 04:40:42PM +0100, Emmanuel Florac wrote:
> Le Wed, 17 Dec 2014 06:58:15 +1100
> Dave Chinner <david@xxxxxxxxxxxxx> écrivait:
> 
> > > 
> > > The firmware is the latest available. How do I turn logging to 11
> > > please ?  
> > 
> > # echo 11 > /proc/sys/fs/xfs/error_level
> 
> OK, so now I've set the error level up, I've rerun my test without
> using LVM, and the FS crashed again, this time more seriously. Here's
> the significant exerpt from /var/log/messages:
> 
> Dec 18 03:56:05 TEST-ADAPTEC -- MARK --
> Dec 18 04:00:04 TEST-ADAPTEC kernel: CPU: 0 PID: 1738 Comm: kworker/0:1H Not tainted 3.16.7-storiq64-opteron #1
> Dec 18 04:00:04 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
> Dec 18 04:00:04 TEST-ADAPTEC kernel: Workqueue: xfslogd xfs_buf_iodone_work
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  0000000000000000 ffff88040e2d5080 ffffffff814ca287 ffff88040e2d5120
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  ffffffff811fbb0d ffff8800df925940 ffff88040e2d5120 ffff8800df925940
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  ffffffff810705a4 0000000000013f00 000000000deed450 ffff88040deed450
> Dec 18 04:00:04 TEST-ADAPTEC kernel: Call Trace:
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff811fbb0d>] ? xfs_buf_iodone_work+0x8d/0xb0
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff810705a4>] ? process_one_work+0x174/0x420
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff81070c4b>] ? worker_thread+0x10b/0x500
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff814cc290>] ? __schedule+0x2e0/0x750

Where's the XFS error output? This is just the output from the
dump_stack() call in the xfs error message code...
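If it's there but just got dropped from the paste, something like
this should pull the full context back out of the log (paths as in
your setup above; the grep patterns are just a guess at what to
match, since XFS kernel messages are prefixed with "XFS (<dev>):"):

# grep -B 20 'Call Trace' /var/log/messages
# dmesg | grep 'XFS ('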

Still, that's implying a write IO error being reported in IO
completion, not a read error, and that's different from the previous
issue you reported. It's also indicative of an error coming from
the storage, not from XFS...
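It would be worth checking whether the block layer logged anything
around the same time, e.g. (the patterns are a rough guess, adjust
for your controller's driver):

# dmesg | grep -iE 'end_request|I/O error|sd[a-z]'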

Do these problems *only* happen during or after a RAID rebuild?

> Phase 7 - verify and correct link counts...
> resetting inode 4294866029 nlinks from 2 to 5
> resetting inode 150323855504 nlinks from 13 to 12
> Metadata corruption detected at block 0x10809dc640/0x1000
> libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> Metadata corruption detected at block 0x10809dc640/0x1000
> libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> done

I'd suggest upgrading xfsprogs, because that's an error that
shouldn't happen at the end of a repair. If the latest version
(3.2.2) doesn't fix the problem, then please send me a compressed
metadump so I can work out what corruption xfs_repair isn't fixing
properly.
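Something like this should do it (/dev/sdX1 is a placeholder for
your filesystem device; run it with the filesystem unmounted):

# xfs_metadump -g /dev/sdX1 /tmp/test.metadump
# xz /tmp/test.metadump

xfs_metadump only copies metadata, and it obfuscates filenames by
default, so there shouldn't be anything sensitive in it.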

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs




