This is shared storage, correct? Have you tried the pvscan/vgscan/lvscan dance? Did you create the VG with -c y?

-luis

Luis E. Cerezo
Global IT
GV: +1 412 223 7396

On Sep 10, 2009, at 3:47 PM, James Marcinek wrote:

> It turns out that after my initial issues I turned off clvmd on both
> nodes. One of them comes up fine but the other hangs... I'm going to
> boot into runlevel 1 and check my LVM setup; this might be the root
> cause (hoping) of why they're not becoming members (one sees the
> phantom LV and the other does not).
>
> ----- Original Message -----
> From: "Luis Cerezo" <Luis.Cerezo@xxxxxxx>
> To: "linux clustering" <linux-cluster@xxxxxxxxxx>
> Sent: Thursday, September 10, 2009 3:58:35 PM GMT -05:00 US/Canada Eastern
> Subject: Re: EXT3 or GFS shared disk
>
> You really have to get the cluster into quorum before LVM will work nicely.
>
> What is the output of clustat?
>
> Do you have clvmd up and running on both nodes?
>
> Did you run pvscan/vgscan/lvscan after initializing the volume?
>
> What did vgdisplay say? Was it set to "not available", etc.?
>
> -luis
>
> Luis E. Cerezo
> Global IT
> GV: +1 412 223 7396
>
> On Sep 10, 2009, at 2:38 PM, James Marcinek wrote:
>
>> I'm running 5.3, and it gave me a locking issue and indicated that it
>> couldn't create the logical volume. The volume showed up anyway,
>> though, and I had some trouble getting rid of it.
>>
>> I couldn't remove the LV because LVM couldn't locate its ID. In the
>> end I rebooted the node, and then I could remove it...
>>
>> I would prefer to use logical volumes if possible. The packages are
>> there; one of the nodes did have an issue with clvmd not starting...
>>
>> I'm working on rebuilding cluster.conf. Each node kept coming up as
>> in the config but "not a member". When I went into
>> system-config-cluster on one node, that node showed up as a member
>> but the other did not. Went to the other node and it was the same
>> thing in reverse: it was a member, but the other was not.
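Luis's checklist can be sketched as shell commands. Names like shared_vg and /dev/sdb1 are placeholders, not from the thread, and the vgdisplay output below is hard-coded because the real commands need shared storage and a quorate cluster with clvmd running:

```shell
#!/bin/sh
# Illustrative only: the commands Luis alludes to are echoed rather than
# executed, since they require a live cluster and shared storage.
echo "vgcreate -c y shared_vg /dev/sdb1"   # -c y marks the VG as clustered
echo "pvscan; vgscan; lvscan"              # rescan so every node sees it

# A clustered VG reports "Clustered  yes" in vgdisplay. Sample (abridged)
# output, hard-coded so the check below actually runs:
vgdisplay_out='  --- Volume group ---
  VG Name               shared_vg
  Clustered             yes
  VG Status             resizable'

if printf '%s\n' "$vgdisplay_out" | grep -Eq 'Clustered[[:space:]]+yes'; then
    clustered=yes
else
    clustered=no
fi
echo "shared_vg clustered: $clustered"
```

If the flag is missing on an existing VG, `vgchange -c y shared_vg` can set it after the fact (again assuming the cluster is quorate).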
>> Right now I've totally scratched the original cluster.conf. I
>> created a new one and copied it to the second node, started cman and
>> rgmanager, and it's the same thing: both nodes are in the config, but
>> only one shows as a member in the cluster management tab...
>>
>> What's going on?
>>
>> ----- Original Message -----
>> From: "Luis Cerezo" <Luis.Cerezo@xxxxxxx>
>> To: "linux clustering" <linux-cluster@xxxxxxxxxx>
>> Sent: Thursday, September 10, 2009 3:14:00 PM GMT -05:00 US/Canada Eastern
>> Subject: Re: EXT3 or GFS shared disk
>>
>> What grief did it give you? Also, what version of RHEL are you
>> running?
>>
>> 5.1 has some known issues with clvmd.
>>
>> -luis
>>
>> Luis E. Cerezo
>> Global IT
>> GV: +1 412 223 7396
>>
>> On Sep 10, 2009, at 12:37 PM, James Marcinek wrote:
>>
>>> Hello again,
>>>
>>> Next question.
>>>
>>> GFS wasn't around back when I took my cluster class (in '04), so
>>> I'm not sure whether or not I should use it in this cluster build...
>>>
>>> If I have an active/passive cluster where only one node needs
>>> access to the file system at a given time, should I just use an
>>> ext3 partition, or should I use GFS on a logical volume?
>>>
>>> I just tried to create a shared logical volume with an ext3
>>> partition (I had already run lvmconf --enable-cluster), but it
>>> caused me some grief, and I switched to a plain partition after
>>> cleaning up...
>>>
>>> Thanks,
>>>
>>> James
>>>
>>> --
>>> Linux-cluster mailing list
>>> Linux-cluster@xxxxxxxxxx
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>> This e-mail, including any attachments and response string, may
>> contain proprietary information which is confidential and may be
>> legally privileged. It is for the intended recipient only. If you
>> are not the intended recipient or a transmission error has
>> misdirected this e-mail, please notify the author by return e-mail
>> and delete this message and any attachment immediately.
>> If you are not the intended recipient you must not use, disclose,
>> distribute, forward, copy, print or rely on this e-mail in any way
>> except as permitted by the author.
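The "in the config but not a member" symptom James describes is normally diagnosed with clustat on each node. A rough sketch of that check, against hard-coded and approximate clustat output (node names are placeholders; on a real node you would capture the output of `clustat` itself):

```shell
#!/bin/sh
# Approximate clustat output for a healthy two-node cluster, hard-coded
# here so the sketch runs; on a real node: clustat_out=$(clustat)
clustat_out='Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 node1                           1 Online, Local
 node2                           2 Online'

# clvmd needs the cluster quorate before it will behave.
if printf '%s\n' "$clustat_out" | grep -q '^Member Status: Quorate'; then
    quorate=yes
else
    quorate=no
fi

# Both nodes must show Online here, not merely appear in cluster.conf.
online=$(printf '%s\n' "$clustat_out" | grep -c 'Online')

echo "quorate=$quorate online_members=$online"
```

In James's situation one would expect each node to report only itself Online, which points at cman membership (fencing, multicast/network reachability, matching cluster.conf versions on both nodes) rather than at LVM itself.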