Dodgy Mounting

Hello,

GFS6.0 // RHEL 3.0

A perfectly normal set-up: assembled the pools and the cluster configuration
archive, got lock_gulmd working, and created the mountpoints and /etc/fstab entries.

Trying to mount /archive:

# mount /archive
mount: wrong fs type, bad option, bad superblock on /dev/pool/gfs0,
       or too many mounted file systems

What's wrong?!
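For what it's worth, mount(8) only ever prints that catch-all message; the
kernel logs the real reason. A sketch of the usual triage (paths taken from
the post above; run as root on the node that fails to mount):

```shell
# 1. The kernel logs the actual failure; mount(8) only shows the generic text.
dmesg | tail -n 20

# 2. Check that the gfs kernel module is loaded. A mount fails with exactly
#    "wrong fs type" when the kernel does not recognise the filesystem type.
lsmod | grep gfs || echo "gfs module not loaded - try: modprobe gfs"

# 3. Retry by hand with the type spelled out, bypassing fstab parsing
#    (commented out here since it needs the cluster up):
# mount -t gfs /dev/pool/gfs0 /archive
```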

# for i in pool ccsd lock_gulmd; do service $i status; done
digex_cca is assembled
gfs0 is assembled
gfs1 is assembled
gfs2 is assembled
gfs3 is assembled
ccsd (pid 5587) is running...
lock_gulmd (pid 5632 5629 5626) is running...
gulm_master: bundlesmanagment is the master
Services:
LTPX
LT000

/etc/fstab looks like this:

/dev/pool/gfs0          /archive                gfs     defaults        1 2
/dev/pool/gfs1          /redo                   gfs     defaults        1 2
/dev/pool/gfs2          /data                   gfs     defaults        1 2
/dev/pool/gfs3          /backups                gfs     defaults        1 2
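One thing that stands out in those entries is the fsck pass number: Red Hat's
GFS examples use "0 0" for gfs lines, since the boot-time fsck pass knows
nothing about cluster locking. A small sketch that checks this (the
check_gfs_fstab name and the "pass must be 0" rule are my assumptions, based
on the GFS documentation examples, not anything from the post):

```shell
# Sanity-check the gfs lines in an fstab file: each should have six fields,
# and the sixth (fsck pass) should be 0 for a clustered filesystem.
check_gfs_fstab() {
    awk '$3 == "gfs" {
        if (NF != 6)      print $1 ": malformed line (" NF " fields)";
        else if ($6 != 0) print $1 ": fsck pass is " $6 " (expected 0 for gfs)";
        else              print $1 ": ok";
    }' "$1"
}
```

Run as `check_gfs_fstab /etc/fstab`; it flags each of the four lines above
because of the trailing "1 2".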

and mount shows this:
# mount
/dev/cciss/c0d0p3 on / type ext3 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/cciss/c0d0p1 on /boot type ext3 (rw)
/dev/cciss/c0d0p7 on /local type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/cciss/c0d0p6 on /tmp type ext3 (rw)
/dev/cciss/c0d0p5 on /var type ext3 (rw)

Any ideas?  What's going on?

Thanks!

Steve

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
