Thanks for the reply, and I'm sorry for the delay. Another admin rebooted the server before I had a chance to collect more info. I'll take a look at the other thread in case it comes up again. I think we'll avoid the lazy-unmount in the future though.
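Roughly, the plan for next time is to find whatever still has the filesystem busy and do a normal unmount instead, along these lines (the mount point below is just a placeholder for our setup):

    # see what is still using the filesystem
    fuser -vm /mnt/array
    lsof +f -- /mnt/array

    # stop (or kill) the holders, then unmount normally instead of lazily
    umount /mnt/array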
Thanks again!

On Mon, Apr 18, 2016 at 1:54 PM, Carlos Maiolino <cmaiolino@xxxxxxxxxx> wrote:
On Sun, Apr 17, 2016 at 09:33:27AM -0500, Joe Wendt wrote:
> Hello! This may be a silly question or an interesting one...
> We had a drive fail in a production server, which spawned this error in
> the logs:
> XFS (sde1): xfs_log_force: error 5 returned.
> The dead array was lazy-unmounted, and the drive was hot-swapped, but
> when the RAID array was rebuilt, it came online as /dev/sdk instead of
> /dev/sde.
> Now /dev/sde1 doesn't exist in the system, but we still see this
> message every 30 seconds. I'm assuming a reboot will clear out whatever
> is still trying to access sde1, but I'm trying to avoid that if
> possible. Could someone point me in the direction of what XFS might
> still be trying to do with that device?
> lsof hasn't given me any clues. I can't run xfs_repair on a volume that
> isn't there. I haven't been able to find anything similar yet online.
> Any help would be greatly appreciated!
> Thanks,
> Joe
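On the question of what might still be using sde1: a lazy unmount only detaches the mount point; the real unmount work still has to finish in the kernel, and if that is stuck, the XFS threads for that filesystem (xfsaild and the log workers) keep retrying writes against the dead device, which would explain the repeating xfs_log_force message (error 5 is EIO). A rough way to check, assuming the usual thread naming (the PID is whatever ps reports):

    # look for a D-state unmount/cleanup task or the AIL daemon for that device
    ps -eo pid,stat,wchan:32,comm | grep -E 'umount|xfsaild'

    # the kernel stack of a stuck task shows where it is waiting
    cat /proc/<pid>/stack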
I believe this is the same problem being discussed in this thread:

XFS hung task in xfs_ail_push_all_sync() when unmounting FS after disk failure/recovery.
Can you get a stack dump of the system (sysrq-t) and post it in some pastebin?
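If sysrq is enabled, something along these lines should capture it (the output file name is just an example; on systemd boxes journalctl -k works for collecting it too):

    # allow sysrq (or at least make sure the task-dump function is permitted)
    echo 1 > /proc/sys/kernel/sysrq

    # dump all task states and kernel stacks into the kernel log
    echo t > /proc/sysrq-trigger

    # grab the result for the pastebin; the dump can be large, so check that
    # nothing was dropped from the ring buffer
    dmesg > sysrq-t.txt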
--
Carlos
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs