Re: GFS volume already mounted or /mnt busy?

bigendian+gfs@xxxxxxxxx wrote:
I had a curious thing happen last night. I have a two-node GFS cluster configuration that currently has only one node. After shutting down and restarting that node, I couldn't mount my GFS volume because it was no longer visible.

pvdisplay, lvdisplay, and vgdisplay all came up blank. I was able to use pvcreate --restorefile and vgcfgrestore to get the volume back, but I then got the following message when trying to mount it:

mount: /dev/etherd/e1.1 already mounted or /gfs busy

I was able to run gfs_fsck on /dev/etherd/e1.1, but I continue to get this error. Running strace on the mount command turns up this:

mount("/dev/etherd/e1.1", "/gfs", "gfs", MS_MGC_VAL|MS_NOATIME|MS_NODIRATIME, "") = -1 EBUSY (Device or resource busy)

What could be happening here?

Thanks,
Tom

Hi Tom,

Hm.  Sounds like something bad happened to the logical volume metadata (i.e., at the LVM layer).
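For reference, the restore path you went through can be sketched roughly like this (the VG name "gfsvg" and the archive filename are hypothetical examples, not from your system; the destructive commands are left commented out):

```shell
# LVM keeps automatic metadata backups and archives here; if the VG
# "disappears", these are what pvcreate --restorefile / vgcfgrestore read:
ls /etc/lvm/archive /etc/lvm/backup 2>/dev/null || echo "no lvm archives found"

# Destructive restore steps (hypothetical names, shown commented out):
# pvcreate --restorefile /etc/lvm/archive/gfsvg_00001.vg \
#          --uuid <PV-UUID-from-that-archive-file> /dev/etherd/e1.1
# vgcfgrestore -f /etc/lvm/archive/gfsvg_00001.vg gfsvg
# vgchange -ay gfsvg
```

The archive file records the PV UUID it expects, which is why pvcreate needs --uuid to recreate the PV label exactly as the VG metadata remembers it.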

Out of curiosity, what was happening on the other node? It wasn't, by chance, doing an install, was it? In the past, I've seen some versions of the Anaconda installer load the QLogic driver, detect my SAN, and offer to automatically reformat it as part of the installation. I hope that didn't happen to you, or if it did, that you unchecked the box for your SAN where the eligible drives were listed.

I'd check all the systems that are attached to the SAN, regardless of whether or not they're part of the cluster, and see if one of them has done something unexpected to the device.

Regards,

Bob Peterson
Red Hat Cluster Suite

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
