Wait now... vgchange -aln on all of the other nodes, perhaps?

Mike

Yet another thing I don't understand here:

Failed to save logical volume :
/dev/sdb: read failed after 0 of 4096 at 0: Input/output error
/dev/sdc: read failed after 0 of 4096 at 0: Input/output error
/dev/sdd: read failed after 0 of 4096 at 0: Input/output error
clvmd not running on node cweb93
clvmd not running on node cweb94
clvmd not running on node sq59
clvmd not running on node qm250
Failed to activate new LV.

First of all, there are a total of 8 nodes in this cluster, so why are these
the only ones complaining? Second, why are they complaining at all? I am
trying to re-create the volumes on another server. I can't get that server up
without the other nodes up, and now I can't get the volumes created without
the other nodes up. Talk about confusing.

Mike

On Wed, 24 Jan 2007 11:15:28 -0600, Jonathan E Brassow wrote:

> Are you using fibre channel fencing in your cluster?
>
> brassow
>
> On Jan 23, 2007, at 11:23 AM, isplist@xxxxxxxxxxxx wrote:
>
>> Is it possible that all of my storage was trashed in some way, such that
>> the devices themselves need to be reformatted?
>>
>> I'm using external RAID storage devices, and I've noticed that there
>> seems to be a bad magic number error when I try to run the usual tools
>> such as fsck or e2fsck.
>>
>> # ./fsck /dev/sdg1
>> fsck 1.35 (28-Feb-2004)
>> e2fsck 1.35 (28-Feb-2004)
>> Couldn't find ext2 superblock, trying backup blocks...
>> fsck.ext2: Bad magic number in super-block while trying to open /dev/sdg1
>>
>> # ./e2fsck -b 8193 /dev/sdg1
>> e2fsck 1.35 (28-Feb-2004)
>> ./e2fsck: Bad magic number in super-block while trying to open /dev/sdg1
>>
>> Is this why nothing is working and I'm getting seg errors on everything?
>>
>> Mike
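
On the bad-magic-number question above: -b 8193 only matches filesystems made
with 1 KiB blocks, so its failure alone does not prove the data is gone. A
minimal sketch for locating the real backup superblocks on /dev/sdg1, assuming
the device really did hold ext2/ext3 and that mke2fs is invoked with the same
parameters originally used to make the filesystem:

# mke2fs -n /dev/sdg1      (the -n flag is a dry run: it only prints where
                            the superblock backups would live, and writes
                            nothing to the device)
# e2fsck -b 32768 /dev/sdg1
                           (32768 is the usual first backup location for a
                            4 KiB-block filesystem; substitute whatever the
                            dry run reported)

If every reported backup location fails the same way, then more than just the
primary superblock has been overwritten, and reformatting may indeed be the
only option.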
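
As for the clvmd errors: with clustered LVM, an LV activation is refused if
any cluster member that should participate cannot be reached. A rough sketch
of the recovery order, assuming RHEL4-era init scripts and using "vgweb" as a
hypothetical volume group name (the thread never names the VG):

On each node that reports "clvmd not running" (cweb93, cweb94, sq59, qm250):

# service clvmd status     (check whether the cluster LVM daemon is up)
# service clvmd start      (start it if it is not)

Then, once clvmd is running on every member:

# vgchange -aln vgweb      (deactivate the VG on the local node only; run
                            this on each node that has it active)
# vgchange -ay vgweb       (reactivate, now that all members can answer)

This is only a sketch of the usual sequence, not a diagnosis; the read
failures on /dev/sdb through /dev/sdd suggest those nodes may also have lost
their paths to the shared storage, which no amount of vgchange will fix.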