Re: CLVM error 6 nodes

You are right, Jim,

It's clvm=0, not clvmd=0!!
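
For reference, assuming that clvm="0" really is the attribute the schema accepts (I have not verified it against the schema itself), the corrected entry for a lock-only node would presumably look like:

    <clusternode name="lock-node3" votes="1" clvm="0">
            <fence>
                    <method name="1">
                            <device name="lock-node3-fence"/>
                    </method>
            </fence>
    </clusternode>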

I wish I had a document listing all the possible tags and options that can be used in cluster.conf...

I found one on the Cluster Project site, but I think it's not complete...
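
In the meantime, one rough way to check what the schema actually allows would be to validate cluster.conf against the RELAX NG schema shipped with the cluster tools; this is only a sketch, and the schema path below is a guess that will differ between releases:

    # sketch only -- adjust the schema path for your installation
    xmllint --relaxng /usr/share/system-config-cluster/misc/cluster.ng \
            --noout /etc/cluster/cluster.conf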

Thanks for the hint!

Regards,
Filipe Miranda

On 10/14/06, Jim Parsons <jparsons@xxxxxxxxxx> wrote:
Filipe Miranda wrote:

> Alasdair,
>
>
> We did the modifications but still the same error:
>
> Since the lock servers do not have access to the LUNs that will be
> in use by GFS, I used the option:
>
><clusternode name="lock-node3" votes="1" clvmd="0">
>
Um, I may be out on a limb here, but I know of no clvmd attribute under
clusternode in our cluster.conf schema.

-Jim

> Only the rac-nodes have access to the shared LUNs...
>
>
> [root@rac-node1 ~]# lvcreate -L4G -n lv01 vg01
>
>   Found duplicate PV o2Xf8uUmskTL5fiVQAgY0nJ1ZJSMA9U3: using /dev/sds1
> not /dev/emcpowerb1
>
>   Found duplicate PV 3yduFLdX3FWPWWMb9lIbtBf3JIPjYnHF: using
> /dev/emcpowerc1 not /dev/sdr1
>
>   Found duplicate PV Kyha3qI6nVE4odg77UXf7igS3FenGJNn: using /dev/sdd1
> not /dev/emcpowerq1
>
>   Found duplicate PV WS1LyhqQ8HaE2fIuSnXNd5sgTRtNzNAJ: using /dev/sdt1
> not /dev/emcpowera1
>
>   Found duplicate PV DuBJ7dZsS3PIO7n5U6hINxPkWorZDzvx: using
> /dev/emcpowerd1 not /dev/sdq1
>
>   Found duplicate PV ZECZzAtbA0e9pFbl9oL0lZg4q7fkS5x4: using /dev/sdk1
> not /dev/emcpowerj1
>
>   Found duplicate PV bnVVmL6WhS2mesnOFUkT4fEfR0cFhybD: using
> /dev/emcpowerk1 not /dev/sdj1
>
>   Found duplicate PV XyXrg2zdxxMS5jo03f9I4QYtGM3ILLGV: using /dev/sdl1
> not /dev/emcpoweri1
>
>   Found duplicate PV SLE5v2eTD7cJlpRUDGG35xfXkRbW86i1: using
> /dev/emcpowerl1 not /dev/sdi1
>
>   Found duplicate PV acGyUd2wX7FnOF94Cbt0ombp10iUWMSf: using /dev/sdm1
> not /dev/emcpowerh1
>
>   Found duplicate PV ll8eNZ0JRh9katV0eui4BcxSc6HBggSI: using
> /dev/emcpowerm1 not /dev/sdh1
>
>   Found duplicate PV ptGubq8R16LxywZ458P7ebmdG3Fq2aJo: using /dev/sdn1
> not /dev/emcpowerg1
>
>   Found duplicate PV PLQ3uON7pYe7nY16gRmAP94WBaEydRwf: using
> /dev/emcpowern1 not /dev/sdg1
>
>   Found duplicate PV PsVYTeKNy6EcqWYbJwQ4KEbPp2Q8HjWv: using /dev/sdo1
> not /dev/emcpowerf1
>
>   Found duplicate PV hvekbzDAltJ3t23QveOMz1axfhj9Mp2j: using
> /dev/emcpowero1 not /dev/sdf1
>
>   Found duplicate PV 5OhUbKbZLW5bTc3tpJeU4YlH0dTttJHF: using /dev/sdp1
> not /dev/emcpowere1
>
>   Found duplicate PV dFtPhq6pkwFdl41NTrAguAEFB3601CTb: using
> /dev/emcpowerp1 not /dev/sde1
>
>   Error locking on node lock-node1: Internal lvm error, check syslog
>
>   Error locking on node lock-node2: Internal lvm error, check syslog
>
>   Error locking on node lock-node3: Internal lvm error, check syslog
>
>   Failed to activate new LV.
>
> File:
> /var/log/messages:
>
> Oct 13 21:20:46 lock-node1 lvm[5478]: Volume group for uuid not found:
> UzvBBmBj7m53APMbye1XXztWjdIavfgX8L5rGTOB3i3KGYPazw1AVaGCmWsXZpqR
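
A quick check, not from this thread: "Volume group for uuid not found" on a lock node usually means that node's LVM layer cannot see the PVs behind that VG at all. Running the scan commands on every node shows what each one actually sees:

    # run on each cluster node
    pvscan
    vgscan
    vgs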
>
>
> Here is the cluster.conf
>
>[root@rac-node1 ~]# cat /etc/cluster/cluster.conf
><?xml version="1.0"?>
><cluster alias="cluster" config_version="6" name="cluster">
>        <fence_daemon post_fail_delay="0" post_join_delay="120"/>
>        <clusternodes>
>
>                <clusternode name="lock-node1" votes="1" clvmd="0">
>                        <fence>
>                                <method name="1">
>
>                                        <device name="lock-node1-fence"/>
>                                </method>
>                        </fence>
>                </clusternode>
>
>                <clusternode name="lock-node2" votes="1" clvmd="0">
>                        <fence>
>
>                                <method name="1">
>                                        <device name="lock-node2-fence"/>
>                                </method>
>                        </fence>
>
>                </clusternode>
>                <clusternode name="lock-node3" votes="1" clvmd="0">
>                        <fence>
>                                <method name="1">
>
>                                        <device name="lock-node3-fence"/>
>                                </method>
>                        </fence>
>                </clusternode>
>
>                <clusternode name="rac-node1" votes="1">
>                        <fence>
>                                <method name="1">
>                                        <device name="rac-node1-fence"/>
>
>                                </method>
>                        </fence>
>                </clusternode>
>                <clusternode name="rac-node2" votes="1">
>
>                        <fence>
>                                <method name="1">
>                                        <device name="rac-node2-fence"/>
>                                </method>
>
>                        </fence>
>                </clusternode>
>                <clusternode name="rac-node3" votes="1">
>                        <fence>
>                                <method name="1">
>
>                                        <device name="rac-node3-fence"/>
>                                </method>
>                        </fence>
>                </clusternode>
>
>        </clusternodes>
>        <gulm>
>                <lockserver name="lock-node1"/>
>                <lockserver name="lock-node2"/>
>                <lockserver name="lock-node3"/>
>
>        </gulm>
>        <fencedevices>
>                <fencedevice agent="fence_ipmilan" auth="none"
>ipaddr="20.20.20.4" login="xxxx" name="lock-node1-fence" passwd="xxxx"/>
>
>                <fencedevice agent="fence_ipmilan" auth="none"
>ipaddr="20.20.20.5" login="xxxx" name="lock-node2-fence" passwd="xxxx"/>
>
>                <fencedevice agent="fence_ipmilan" auth="none"
>ipaddr="20.20.20.6" login="xxxx" name="lock-node3-fence" passwd="xxxx"/>
>
>                <fencedevice agent="fence_ipmilan" auth="none"
>ipaddr="20.20.20.1" login="xxxx" name="rac-node1-fence" passwd="xxxx"/>
>
>                <fencedevice agent="fence_ipmilan" auth="none"
>ipaddr="20.20.20.2" login="xxxx" name="rac-node2-fence" passwd="xxxx"/>
>
>                <fencedevice agent="fence_ipmilan" auth="none"
>ipaddr="20.20.20.3" login="xxxx" name="rac-node3-fence" passwd="xxxx"/>
>
>        </fencedevices>
>        <rm>
>                <failoverdomains/>
>                <resources/>
>        </rm>
></cluster>
>
>
> /etc/lvm/lvm.conf
>
> # By default we accept every block device:
>
>     # filter = [ "a/.*/" ]
>
>     filter = [ "r|/dev/sda|", "a/.*/" ]
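
Given the duplicate-PV warnings above, it may also be worth making the filter prefer the PowerPath pseudo-devices over the raw /dev/sd* paths so each PV is only seen once. This is only a sketch and assumes no local volume group lives directly on a /dev/sd* disk:

    # accept emcpower devices first, reject the underlying sd paths,
    # then accept everything else (first matching pattern wins)
    filter = [ "a|/dev/emcpower.*|", "r|/dev/sd.*|", "a/.*/" ]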
>
>
>
> Regards,
> Filipe Miranda
>
> On 10/13/06, Filipe Miranda <filipe.miranda@xxxxxxxxx> wrote:
>
>     Alasdair,
>
>     Do I need to reboot the machine to test these configuration
>     changes? Or is there a way to test them without rebooting the
>     machine?
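
A sketch of an answer, not from this thread: lvm.conf is re-read by the LVM command-line tools on every invocation, so a plain scan should show whether the new filter takes effect without a reboot; whether the clvmd daemons on each node also need a restart to pick it up is an assumption worth verifying.

    # check the effect of the filter change without rebooting
    pvscan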
>
>     I will try to swap the order of the filters,
>
>     Thanks for the hint
>
>
>     On 10/13/06, Alasdair G Kergon <agk@xxxxxxxxxx> wrote:
>
>         On Fri, Oct 13, 2006 at 06:50:24PM -0300, Filipe Miranda wrote:
>         > filter = [ "a/.*/", "r|/dev/sda|" ] (forgot to close the pipe earlier)
>
>         Still won't work - swap the order of the two items; first matches
>         everything so second isn't looked at.
>         (man lvm.conf)
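
To spell out the ordering rule (the first matching pattern wins, so a catch-all accept placed first hides everything after it):

    # wrong: "a/.*/" matches every device, the reject rule is never reached
    filter = [ "a/.*/", "r|/dev/sda|" ]

    # right: reject /dev/sda first, then accept the rest
    filter = [ "r|/dev/sda|", "a/.*/" ]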
>
>         Alasdair
>         --
>         agk@xxxxxxxxxx
>
>
>
>
>
>     --
>     ---
>     Filipe T Miranda
>     Red Hat Certified Engineer
>
>
>
>
> --
> ---
> Filipe T Miranda
> Red Hat Certified Engineer
>
>
>






--
---
Filipe T Miranda
Red Hat Certified Engineer
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
