Re: need help how to debug xfs crash issue xfs_iunlink_remove: xfs_inotobp() returned error 22

A meager non-expert user question with full ignorance of glusterfs:  Why are you having I/O errors once every two weeks?

This looks like XFS behavior I've seen under two conditions: (1) when I test XFS on a device-mapper flakey target, using XFS without an external journal, and (2) when I press my hard-drive connectors against the motherboard while the PC is still running.  Your error message looks more like the result of (2) than of (1).
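For anyone who wants to reproduce condition (1), here is a rough sketch of putting XFS on a dm-flakey target; the device name and up/down intervals are placeholders, and this must run as root on a disposable scratch device, so treat it as a sketch rather than a recipe:

```shell
# dm-flakey table format: <start> <length> flakey <dev> <offset> <up_sec> <down_sec>
SCRATCH=/dev/sdX                       # placeholder -- use a disposable device only
SIZE=$(blockdev --getsz "$SCRATCH")    # device size in 512-byte sectors
echo "0 $SIZE flakey $SCRATCH 0 30 5" | dmsetup create test-flakey
mkfs.xfs -f /dev/mapper/test-flakey
mount /dev/mapper/test-flakey /mnt     # I/O fails for 5s out of every 35s
# ...run a workload, watch dmesg for XFS shutdown messages...
umount /mnt && dmsetup remove test-flakey
```

With those intervals the device silently drops I/O during each 5-second "down" window, which is usually enough to trigger an XFS forced shutdown under load.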

XFS behavior on flakey is not the best, and I wish it would recover in such situations.  In case (2), I'm fairly sure the PC is confused at a hardware level, because the drive light does not go out.  Then again, seeing the behavior of other filesystems that fight through the errors, maybe it's for the best.  If you're fighting I/O errors, there is no winner; it's best to eliminate the source of the I/O errors.
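For reference, the numbers in logs like the one quoted below ("returned error 22", "xfs_log_force: error 5") are kernel errno values, and decoding them shows what XFS is actually reporting; a quick way to do that with Python as a neutral lookup tool:

```python
import errno
import os

# "error 22" and "error 5" from the quoted log are standard errno codes.
for code in (22, 5):
    print(code, errno.errorcode[code], os.strerror(code))
# 22 is EINVAL: xfs_inotobp rejected the inode number as invalid
# (likely on-disk corruption).  5 is EIO: once the filesystem has
# shut down, subsequent operations fail with I/O errors.
```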

OK, I'm off the soapbox and will quietly wait for a RAID expert like Dave or Stan to jump in and make me feel like a complete amateur...

Michael

On Tue, Apr 9, 2013 at 9:03 AM, 符永涛 <yongtaofu@xxxxxxxxx> wrote:
BTW
xfs_info /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=28, agsize=268435440 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=7324303360, imaxpct=5
         =                       sunit=16     swidth=160 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

2013/4/9, 符永涛 <yongtaofu@xxxxxxxxx>:
> Dear xfs experts,
> I really need your help! In our production environment we run
> glusterfs on top of XFS on Dell x720D servers (RAID 6). The XFS
> filesystem crashes on some of the servers about every two weeks.
> Can you point me in the right direction for debugging and avoiding
> this issue? Thank you very much!
>
> uname -a
> Linux cqdx.miaoyan.cluster1.node11.qiyi.domain 2.6.32-279.el6.x86_64
> #1 SMP Wed Jun 13 18:24:36 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
>
> The crash log is the same every time, as follows:
>
> Apr  9 09:41:36 cqdx kernel: XFS (sdb): xfs_iunlink_remove:
> xfs_inotobp() returned error 22.
> Apr  9 09:41:36 cqdx kernel: XFS (sdb): xfs_inactive: xfs_ifree
> returned error 22
> Apr  9 09:41:36 cqdx kernel: XFS (sdb):
> xfs_do_force_shutdown(0x1) called from line 1184 of file
> fs/xfs/xfs_vnodeops.c.  Return address = 0xffffffffa02ee20a
> Apr  9 09:41:36 cqdx kernel: XFS (sdb): I/O Error Detected.
> Shutting down filesystem
> Apr  9 09:41:36 cqdx kernel: XFS (sdb): Please umount the
> filesystem and rectify the problem(s)
> Apr  9 09:41:53 cqdx kernel: XFS (sdb): xfs_log_force: error 5
> returned.
> Apr  9 09:42:23 cqdx kernel: XFS (sdb): xfs_log_force: error 5
> returned.
> Apr  9 09:42:53 cqdx kernel: XFS (sdb): xfs_log_force: error 5
> returned.
> Apr  9 09:43:23 cqdx kernel: XFS (sdb): xfs_log_force: error 5
> returned.
>
> --
> 符永涛
>


--
符永涛

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

