Re: storage, libaio, or XFS problem? 3.4.26

On Sun, Sep 07, 2014 at 12:23:03AM -0500, stan hoeppner wrote:
> I have some more information regarding the AIO issue.  I fired up the
> test harness and it ran for 30 hours at 706 MB/s avg write rate, 303
> MB/s per LUN, nearly flawlessly, less than 0.01% buffer loss, and avg IO
> times were less than 0.5 seconds.  Then the app crashed and I found the
> following in dmesg.  I had to "hard reset" the box due to the shrapnel.
>  There are no IO errors of any kind leading up to the forced shutdown.
> I assume the inode update and streamRT-sa hung task traces are a result
> of the forced shutdown, not a cause of it.  In lieu of an xfs_repair
> with a version newer than I'm able to install, any ideas what caused the
> forced shutdown after 30 hours, given there are no errors preceding it?
> 
> 
> Sep  6 06:33:33 Anguish-ssu-1 kernel: [288087.334863] XFS (dm-5):
> xfs_do_force_shutdown(0x8) called from line 3732 of file
> fs/xfs/xfs_bmap.c.  Return address = 0xffffffffa02009a6
> Sep  6 06:33:42 Anguish-ssu-1 kernel: [288096.220920] XFS (dm-5): failed
> to update timestamps for inode 0x2ffc9caae

Hi Stan, can you turn off line wrapping for stuff you paste in?
It's all but unreadable when it line wraps like this.

Next, you need to turn /proc/sys/fs/xfs/error_level up to 11 so that
it dumps a stack trace on corruption events. I don't have a source
tree for your kernel (I can't remember what version you are running)
in front of me to convert that line number to something meaningful,
so on its own it's not a great help...
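
If you want to set that from the test harness rather than by hand, a
minimal sketch (just the equivalent of `echo 11 >
/proc/sys/fs/xfs/error_level` run as root, writing to the procfs knob
named above) looks like:

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		/* Equivalent to: echo 11 > /proc/sys/fs/xfs/error_level */
		FILE *f = fopen("/proc/sys/fs/xfs/error_level", "w");

		if (!f) {
			perror("fopen /proc/sys/fs/xfs/error_level");
			return EXIT_FAILURE;
		}
		if (fprintf(f, "11\n") < 0) {
			perror("write error_level");
			fclose(f);
			return EXIT_FAILURE;
		}
		if (fclose(f) == EOF) {
			perror("close error_level");
			return EXIT_FAILURE;
		}
		return EXIT_SUCCESS;
	}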

Was there anything in the logs before the shutdown?  i.e. can you
paste the dmesg output from the start of the test (i.e. the mount of
the fs) to the end?

As it is, all the traces look like this:

> [<ffffffff814f5fd7>] schedule+0x64/0x66
> [<ffffffff814f66ec>] rwsem_down_failed_common+0xdb/0x10d
> [<ffffffff814f6731>] rwsem_down_write_failed+0x13/0x15
> [<ffffffff81261913>] call_rwsem_down_write_failed+0x13/0x20
> [<ffffffff814f5458>] ? down_write+0x25/0x27
> [<ffffffffa01e75e4>] xfs_ilock+0x4f/0xb4 [xfs]
> [<ffffffffa01e40e5>] xfs_rw_ilock+0x2c/0x33 [xfs]
> [<ffffffff814f6ac6>] ? _raw_spin_unlock_irq+0x27/0x32
> [<ffffffffa01e4519>] xfs_file_aio_write_checks+0x41/0xfe [xfs]
> [<ffffffffa01e46ff>] xfs_file_dio_aio_write+0x103/0x1fc [xfs]
> [<ffffffffa01e4ac3>] xfs_file_aio_write+0x152/0x1b5 [xfs]
> [<ffffffffa01e4971>] ? xfs_file_buffered_aio_write+0x179/0x179 [xfs]
> [<ffffffff81133694>] aio_rw_vect_retry+0x85/0x18a
> [<ffffffff8113360f>] ? aio_fsync+0x29/0x29
> [<ffffffff81134c10>] aio_run_iocb+0x7b/0x149
> [<ffffffff81134fe9>] io_submit_one+0x199/0x1f3
> [<ffffffff8113513d>] do_io_submit+0xfa/0x271
> [<ffffffff811352c4>] sys_io_submit+0x10/0x12
> [<ffffffff814fc912>] system_call_fastpath+0x16/0x1b

Which implies that the shutdown didn't unlock the inode correctly:
the hung tasks are all parked in rwsem_down_write_failed() under
xfs_ilock(), waiting for an inode lock that was never released. But
without knowing what the call stack was at the time of the shutdown,
I can't really tell...
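
For illustration only -- this is not XFS code, just a userspace
pthread rwlock standing in for the inode lock -- the failure shape is
the same: an error path that returns without unlocking leaves every
later writer stuck in the lock's slow path:

	#define _POSIX_C_SOURCE 200809L
	#include <errno.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <time.h>

	static pthread_rwlock_t fake_ilock = PTHREAD_RWLOCK_INITIALIZER;

	/* Stand-in for an error/shutdown path that bails without unlocking. */
	static void *broken_error_path(void *arg)
	{
		(void)arg;
		pthread_rwlock_wrlock(&fake_ilock);
		/* oops: no pthread_rwlock_unlock() on the way out */
		return NULL;
	}

	int main(void)
	{
		pthread_t t;
		struct timespec deadline;

		pthread_create(&t, NULL, broken_error_path, NULL);
		pthread_join(t, NULL);

		/*
		 * The next writer (think: the next io_submit reaching
		 * xfs_ilock) can never get the lock.  Use a timeout so
		 * this demo reports the hang instead of reproducing it.
		 */
		clock_gettime(CLOCK_REALTIME, &deadline);
		deadline.tv_sec += 2;
		if (pthread_rwlock_timedwrlock(&fake_ilock, &deadline) ==
		    ETIMEDOUT)
			printf("second writer stuck: lock never released\n");
		return 0;
	}

Build with "cc -pthread"; it prints the "stuck" line after two
seconds instead of hanging like your test boxes did.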

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
