Re: "xfs_log_force: error 5 returned." for drive that was removed.

On Wed, May 04, 2016 at 10:48:13AM -0500, Joe Wendt wrote:
>    Thanks for the reply, and I'm sorry for the delay. Another admin
>    rebooted the server before I had a chance to collect more info. I'll
>    take a look at the other thread in case it comes up again. I think
>    we'll avoid the lazy-unmount in the future though.
>    Thanks again!
>    -Joe
> 

It certainly looks like the same problem, which should be fixed by the
patchset we are working on to add configurable behavior for different kinds
of errors.
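
For what it's worth, error 5 is EIO: a lazy unmount keeps the filesystem
alive until the last reference to it is dropped, so the periodic log worker
keeps trying to force the log on a device that is no longer there, which is
why the message comes back roughly every 30 seconds. The patchset is expected
to expose this behavior through per-filesystem sysfs knobs; the sketch below
only illustrates how such knobs might be driven once they land, and the knob
names used here (error/fail_at_unmount, error/metadata/EIO/max_retries) are
assumptions rather than the final interface.

#!/usr/bin/env python3
# Sketch: tune assumed per-filesystem XFS error-handling knobs via sysfs.
# The knob names below are assumptions about the interface under discussion;
# adjust them to whatever actually lands in your kernel.
import os
import sys

SYSFS_XFS = "/sys/fs/xfs"

def set_knob(device, relpath, value):
    """Write a value to /sys/fs/xfs/<device>/<relpath>, if the knob exists."""
    path = os.path.join(SYSFS_XFS, device, relpath)
    if not os.path.exists(path):
        print(f"{path}: not present on this kernel, skipping")
        return
    with open(path, "w") as f:
        f.write(str(value))
    print(f"{path} = {value}")

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sde1"
    # Stop retrying metadata writeback on EIO instead of retrying forever
    # against a device that has gone away (0 = no retries in this sketch).
    set_knob(dev, "error/metadata/EIO/max_retries", 0)
    # Fail pending metadata I/O at unmount rather than hanging there.
    set_knob(dev, "error/fail_at_unmount", 1)

Something like this would need to run as root, and as long as the lazily
unmounted filesystem is still around, its directory under /sys/fs/xfs/
should still be present.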

>    On Mon, Apr 18, 2016 at 1:54 PM, Carlos Maiolino
>    <cmaiolino@xxxxxxxxxx> wrote:
> 
>    On Sun, Apr 17, 2016 at 09:33:27AM -0500, Joe Wendt wrote:
>    >    Hello! This may be a silly question or an interesting one...
>    >    We had a drive fail in a production server, which spawned this
>    >    error in the logs:
>    >    XFS (sde1): xfs_log_force: error 5 returned.
>    >    The dead array was lazy-unmounted, and the drive was hot-swapped,
>    >    but when the RAID array was rebuilt, it came online as /dev/sdk
>    >    instead of /dev/sde.
>    >    Now /dev/sde1 doesn't exist in the system, but we still see this
>    >    message every 30 seconds. I'm assuming a reboot will clear out
>    >    whatever is still trying to access sde1, but I'm trying to avoid
>    >    that if possible. Could someone point me in the direction of what
>    >    XFS might still be trying to do with that device?
>    >    lsof hasn't given me any clues. I can't run xfs_repair on a volume
>    >    that isn't there. I haven't been able to find anything similar yet
>    >    online.
>    >    Any help would be greatly appreciated!
>    >    Thanks,
>    >    Joe
> 
>      I believe this is the same problem being discussed in this thread:
>      XFS hung task in xfs_ail_push_all_sync() when unmounting FS after
>      disk failure/recovery.
>      Can you get a stack dump of the system (sysrq-t) and post it in some
>      pastebin?
>      --
>      Carlos



-- 
Carlos

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


