Hello! This may be a silly question or an interesting one...
We had a drive fail in a production server, which spawned this error in the logs:
XFS (sde1): xfs_log_force: error 5 returned.
The dead array was lazy-unmounted, and the drive was hot-swapped, but when the RAID array was rebuilt, it came online as /dev/sdk instead of /dev/sde.
Now /dev/sde1 no longer exists on the system, but we still see this message every 30 seconds. I'm assuming a reboot would clear out whatever is still trying to access sde1, but I'd like to avoid one if possible. Could someone point me in the direction of what XFS might still be trying to do with that device?
lsof hasn't turned up any clues, and I can't run xfs_repair on a volume that isn't there. I haven't found anything similar online yet. Any help would be greatly appreciated!
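For what it's worth, the only other check I could think of was whether the kernel still lists the old device anywhere under /proc (a rough sketch; device names are obviously from this box):

```shell
# Does the kernel still know about the old device after the lazy unmount?
# A lazily unmounted filesystem can linger internally even though it is
# gone from the namespace, so grep the mount table and partition list.
grep sde /proc/mounts /proc/partitions || echo "no trace of sde in /proc"
```

On this server neither file shows sde anymore, which is what makes the recurring xfs_log_force message so puzzling.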
Thanks,
Joe
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs