Re: GFS volume already mounted or /mnt busy?

Hello Robert,

The other node was previously rebuilt for another temporary purpose and isn't attached to the SAN.  The only thing out of the ordinary I can think of is that I may have pulled the power on the machine while it was shutting down, possibly in the middle of a file system operation.  The disk array itself never lost power.

I do have another two machines configured in a different cluster attached to the SAN.  CLVM on those machines does show the volume I am having trouble with, though they do not mount the device.  Could this have caused the trouble?
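For reference, these are the LVM2 reporting commands I'd run on each machine attached to the SAN to compare what they see (read-only queries, so they shouldn't touch the on-disk metadata):

  # Run on every host attached to the SAN:
  pvs -o pv_name,vg_name,pv_uuid
  lvs -o lv_name,vg_name,lv_attr,devices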

More importantly, is there a way to repair the volume?  I can see the device with fdisk -l, and gfs_fsck completes (albeit with errors), but mount attempts always fail with the "mount: /dev/etherd/e1.1 already mounted or /gfs busy" error.  I don't know how to debug this at a lower level to understand why the error occurs.  Any pointers?
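These are the checks I know to try for an EBUSY like this (standard util-linux and device-mapper tools; device and mount point names as above):

  # Anything already using the device or the mount point?
  grep -E 'etherd|gfs' /proc/mounts
  fuser -vm /gfs                 # processes holding the mount point
  lsof /dev/etherd/e1.1          # processes with the device open

  # Device-mapper may be holding the block device open:
  dmsetup ls
  dmsetup info -c                # includes an open count per dm device

  # Kernel messages right after the failed mount attempt:
  dmesg | tail -n 20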

Here's what I get from vgdisplay:
  --- Volume group ---
  VG Name               gfs_vol2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.77 TB
  PE Size               4.00 MB
  Total PE              465039
  Alloc PE / Size       445645 / 1.70 TB
  Free  PE / Size       19394 / 75.76 GB
  VG UUID               3ngpos-p9iD-yB5i-vfUp-YQHf-2tVa-vqiSFA
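Since the VG is clustered, I also want to confirm the logical volume is actually active on this node before mount runs.  A couple of standard LVM2 commands should cover that (I'm assuming clvmd is running, since the VG is marked clustered):

  # The 5th character of lv_attr should be 'a' (active):
  lvs -o lv_name,vg_name,lv_attr gfs_vol2

  # If it isn't, activate the VG's volumes locally:
  vgchange -aly gfs_vol2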

Thanks for your input.  Any help is appreciated!

Tom


On 12/22/06, Robert Peterson <rpeterso@xxxxxxxxxx> wrote:
bigendian+gfs@xxxxxxxxx wrote:
> I had a curious thing happen last night. I have a two-node GFS cluster
> configuration that currently has only one node.  After shutting down
> and restarting the one node, I couldn't mount my GFS volume
> because it was no longer visible.
>
> The pvdisplay, lvdisplay, and vgdisplay all came up blank.  I was able
> to use pvcreate --restorefile and vgcfgrestore to get the volume
> back.  I then got the following message when trying to mount the volume:
>
> mount: /dev/etherd/e1.1 already mounted or /gfs busy
>
> I was able to gfs_fsck /dev/etherd/e1.1, but I continue to get this
> error.  Running strace on the mount command turns up this error:
>
> mount("/dev/etherd/e1.1", "/gfs", "gfs",
> MS_MGC_VAL|MS_NOATIME|MS_NODIRATIME, "") = -1 EBUSY (Device or
> resource busy)
>
> What could be happening here?
>
> Thanks,
> Tom
Hi Tom,

Hm.  Sounds like something bad happened to the logical volume (i.e. LVM).

Out of curiosity, what was happening on the other node?  It wasn't, by
chance, doing an install, was it?  In the past, I've seen some versions
of the Anaconda installer load the QLogic driver, detect my SAN, and
offer to automatically reformat it as part of the installation.  I hope
that didn't happen to you, or if it did, that you unchecked the box for
your SAN where the eligible drives were listed.

I'd check all the systems that are attached to the SAN, regardless of
whether or not they're part of the cluster, and see if one of them has
done something unexpected to the device.

Regards,

Bob Peterson
Red Hat Cluster Suite


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
