Re: Linux-cluster Digest, Vol 64, Issue 42

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Thu, 2009-08-27 at 04:00 +1200, linux-cluster-request@xxxxxxxxxx
wrote:
> Send Linux-cluster mailing list submissions to
>         linux-cluster@xxxxxxxxxx
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://www.redhat.com/mailman/listinfo/linux-cluster
> or, via email, send a message with subject or body 'help' to
>         linux-cluster-request@xxxxxxxxxx
> 
> You can reach the person managing the list at
>         linux-cluster-owner@xxxxxxxxxx
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-cluster digest..."
> 
> 
> Today's Topics:
> 
>    1. Re: NFS client failover (crypto grid)
>    2. Re: NFS client failover (Mike Cardwell)
>    3. Using iscsi devices with scsi reservation and     fence_scsi
>       agent (carlopmart)
>    4. Fencing Logs (Gordan Bobic)
>    5. Re: Apache Cluster (saji george)
>    6. Re: NFS client failover (Mike Cardwell)
>    7. CLVM not activating LVs (Jakov Sosic)
>    8. Re: CLVM not activating LVs (Christine Caulfield)
>    9. 3 node cluster and quorum disk? (Jakov Sosic)
>   10. Re: CLVM not activating LVs (Jakov Sosic)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Tue, 25 Aug 2009 16:14:30 -0300
> From: crypto grid <cryptogrid@xxxxxxxxx>
> Subject: Re:  NFS client failover
> To: linux clustering <linux-cluster@xxxxxxxxxx>
> Message-ID:
>         <a9f464b80908251214q33650614p52e449d62ee30d7f@xxxxxxxxxxxxxx>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Tue, Aug 25, 2009 at 8:19 AM, Mike Cardwell <
> linux-cluster@xxxxxxxxxxxxxxxxxx> wrote:
> 
> > Hi,
> >
> > I have a small cluster which shares a filesystem via GFS on a SAN,
> with
> > iSCSI. There are a couple of hosts external to this cluster which I
> would
> > like to have access to the GFS filesystem via NFS. I have exported
> the same
> > mountpoint on each of the hosts that have the GFS mount.
> >
> > I can connect via NFS to one of these cluster hosts, but when it
> goes down,
> > I have to unmount it, and then remount against another host. I
> suspect I
> > could do something with a floating IP? but I was wondering if it
> would be
> > possible to do this with client side logic so that the nfs mount
> > automatically moves to a different working host if one stops
> responding?
> 
> 
> I don't see why you want to resolve this from the client side. You
> must
> configure an ip address resource.
> Take a look at this document:
> http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/pdf/Configuration_Example_-_NFS_Over_GFS.pdf
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> https://www.redhat.com/archives/linux-cluster/attachments/20090825/33fe8333/attachment.html
> 
> ------------------------------
> 
> Message: 2
> Date: Tue, 25 Aug 2009 20:40:33 +0100
> From: Mike Cardwell <linux-cluster@xxxxxxxxxxxxxxxxxx>
> Subject: Re:  NFS client failover
> To: linux-cluster@xxxxxxxxxx
> Message-ID: <4A943E31.3080209@xxxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> crypto grid wrote:
> 
> >     I have a small cluster which shares a filesystem via GFS on a
> SAN,
> >     with iSCSI. There are a couple of hosts external to this cluster
> >     which I would like to have access to the GFS filesystem via NFS.
> I
> >     have exported the same mountpoint on each of the hosts that have
> the
> >     GFS mount.
> >
> >     I can connect via NFS to one of these cluster hosts, but when it
> >     goes down, I have to unmount it, and then remount against
> another
> >     host. I suspect I could do something with a floating IP? but I
> was
> >     wondering if it would be possible to do this with client side
> logic
> >     so that the nfs mount automatically moves to a different working
> >     host if one stops responding?
> >
> > I don't see why you want to resolve this from the client side.
> 
> I figured that failover would happen more smoothly if the client was
> aware of and in control of what was going on. If the IP suddenly moves
> to another NFS server I don't know how the NFS client will cope with
> that.
> 
> > You must configure an ip address resource.
> 
> Yeah, that's what I was going to try if there wasn't a client side
> solution.
> 
> > Take a look at this document:
> >
> http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/pdf/Configuration_Example_-_NFS_Over_GFS.pdf
> 
> That document looks very useful. Thank you for pointing it out to me.
> 
> --
> Mike Cardwell - IT Consultant and LAMP developer
> Cardwell IT Ltd. (UK Reg'd Company #06920226) http://cardwellit.com/
> 
> 
> 
> ------------------------------
> 
> Message: 3
> Date: Tue, 25 Aug 2009 21:47:13 +0200
> From: carlopmart <carlopmart@xxxxxxxxx>
> Subject:  Using iscsi devices with scsi reservation and
>         fence_scsi agent
> To: linux clustering <linux-cluster@xxxxxxxxxx>
> Message-ID: <4A943FC1.3060104@xxxxxxxxx>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> Hi all,
> 
>   Somebody knows if can I use an iscsi target (iscsitarget.sf.net)
> with
> scsi fence reservation?? Does it owrks on production environments??
> 
> Thanks
> --
> CL Martinez
> carlopmart {at} gmail {d0t} com
> 
> 
> 
> ------------------------------
> 
> Message: 4
> Date: Tue, 25 Aug 2009 23:43:17 +0100
> From: Gordan Bobic <gordan@xxxxxxxxxx>
> Subject:  Fencing Logs
> To: linux clustering <linux-cluster@xxxxxxxxxx>
> Message-ID: <4A946905.4040002@xxxxxxxxxx>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> I have a really strange problem on one of my clusters. It exhibits all
> signs of fencing being broken, but the fencing agents work when tested
> manually, and I cannot find anything in syslog to even suggest that
> fencing is being attempted by the surviving node (which just locks up
> on
> GFS access until the other node returns).
> 
> Has anybody got any suggestions on how to troubleshoot this?
> 
> The relevant extract from my cluster.conf is:
> 
> <clusternodes>
>          <clusternode name="hades-cls" nodeid="1" votes="1">
> ...
>                  <fence>
>                          <method name = "1">
>                                  <device name = "hades-oob"/>
>                          </method>
>                  </fence>
> ...
>          </clusternode>
>          <clusternode name="persephone-cls" nodeid="2" votes="1">
> ...
>                  <fence>
>                          <method name = "1">
>                                  <device name ="persephone-oob"/>
>                          </method>
>                  </fence>
>          </clusternode>
> </clusternodes>
> <fencedevices>
>          <fencedevice agent="fence_eric" ipaddr="10.1.254.251"
> login="fence" passwd="some_password" name="hades-oob"/>
>          <fencedevice agent="fence_eric" ipaddr="10.1.254.252"
> login="fence" passwd="some_password" name="persephone-oob"/>
> </fencedevices>
> ...
> 
> I have a near identical setup on all my other clusters, so this is
> somewhat baffling. What else could be relevant to this, specifically
> in
> the context of no fencing attempts even showing up in the logs? I have
> set up scores of RHCS clusters and never seen anything like this
> before.
> The only unusual thing about this cluster is that I had to write a
> bespoke fencing agent for the machines, but these test true when I use
> them to down/reboot the machines.
> 
> TIA.
> 
> Gordan
> 
> 
> 
> ------------------------------
> 
> Message: 5
> Date: Wed, 26 Aug 2009 10:47:13 +0530
> From: saji george <george.saji00@xxxxxxxxx>
> Subject: Re:  Apache Cluster
> To: linux clustering <linux-cluster@xxxxxxxxxx>
> Message-ID:
>         <2460eaad0908252217w1c959990i6aca6b67c71ba1a1@xxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Hi,
>     I have added the ip, still its not working with the Resource
> Apache server, But the same is working with the Resource script. And
> in script sections only the /etc/init.d/httpd script is working.
> 
> Regards
> Saji George
> 
> On 8/22/09, crypto grid <cryptogrid@xxxxxxxxx> wrote:
> > On Wed, Aug 19, 2009 at 12:54 PM, saji george
> > <george.saji00@xxxxxxxxx>wrote:
> >
> >> Hi
> >>   I have configured the GFS cluster successfully in RHEL 5.3
> >> But I am not able to configure the cluster for apache.while
> enabling the
> >> service I am getting the below error
> >>
> >> # clusvcadm -e webcluster
> >> Local machine trying to enable service:webcluster...Aborted;
> service
> >> failed
> >>
> >> /log/messages
> >>
> >> Aug 19 21:12:49 cmsstorage-web1 clurgmgrd[32766]: <err> #43:
> Service
> >> service:webcluster has failed; can not start.
> >> Aug 19 21:12:49 cmsstorage-web1 clurgmgrd[32766]: <crit> #13:
> Service
> >> service:webcluster failed to stop cleanly
> >>
> >> cluster.conf
> >>
> >> <failoverdomain name="GFS-apache" ordered="1" restricted="1">
> >>                                 <failoverdomainnode
> name="cmsstorage-web3"
> >> priority="3"/>
> >>                                 <failoverdomainnode
> name="cmsstorage-web2"
> >> priority="2"/>
> >>                                 <failoverdomainnode
> name="cmsstorage-web1"
> >> priority="1"/>
> >>                         </failoverdomain>
> >>
> >> <resources>
> >> <apache config_file="conf/httpd.conf" httpd_options=""
> name="webcluster"
> >> server_root="/usr/local/apache/" shutdown_wait="2"/>
> >> </resources>
> >>
> >>  <service autostart="1" domain="GFS-apache" name="webcluster"
> >> recovery="restart">
> >>                         <apache ref="webcluster"/>
> >>                 </service>
> >>
> >> Is there anything missing...
> >
> >
> >
> > You are missing the definition of an ip address in the resource
> section, and
> > after that, the use of the defined ip address into the service
> definition.
> >
> > Regards,
> >
> >
> >
> >>
> >>
> >> Thanks in advance.
> >>
> >> Regards,
> >> Saji George
> >>
> >>
> >> --
> >> Linux-cluster mailing list
> >> Linux-cluster@xxxxxxxxxx
> >> https://www.redhat.com/mailman/listinfo/linux-cluster
> >>
> >
> 
> 
> 
> ------------------------------
> 
> Message: 6
> Date: Wed, 26 Aug 2009 09:55:24 +0100
> From: Mike Cardwell <linux-cluster@xxxxxxxxxxxxxxxxxx>
> Subject: Re:  NFS client failover
> To: linux clustering <linux-cluster@xxxxxxxxxx>
> Message-ID: <4A94F87C.7030708@xxxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> On 25/08/2009 20:40, Mike Cardwell wrote:
> 
> > I figured that failover would happen more smoothly if the client was
> > aware of and in control of what was going on. If the IP suddenly
> moves
> > to another NFS server I don't know how the NFS client will cope with
> that.
> 
> Well, it seems to cope quite well. The nfs mount "hangs" for a few
> seconds whilst the IP moves from one server to another (unavoidable
> obviously), but it then picks up from where it was. I suspect there
> will
> be file corruption issues with files that are partially written when
> the
> failover happens, but I guess that can't be avoided without a client
> side solution.
> 

NFS seems to handle all this quite well. It has been around a while.
> --
> Mike Cardwell - IT Consultant and LAMP developer
> Cardwell IT Ltd. (UK Reg'd Company #06920226) http://cardwellit.com/
> 
> 
> 
> ------------------------------
> 
> Message: 7
> Date: Wed, 26 Aug 2009 15:19:03 +0200
> From: Jakov Sosic <jakov.sosic@xxxxxxx>
> Subject:  CLVM not activating LVs
> To: linux-cluster@xxxxxxxxxx
> Message-ID: <20090826151903.6e4e7759@xxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=US-ASCII
> 
> Hi! CLVM not activating logical volumes.
> 
> 
> 
> I have a major issues with CLVM. It is not activating volumes in my
> VG's. I have 2 iSCSI volumes, and one SAS volume with 3 VG's. On
> node01, all logical volumes in 1 iSCSI and on SAS are activated. Other
> iSCSI - zero. On node02 same situation. On node03 only lv's from SAS
> are activated. lvm.conf is same on the machines....
> 
> This is very strange, because when I boot the machines, all the
> services are stopped, so logical volumes shouldn't be activated.
> 
> Here is the situation:
> 
> [root@node01 lvm]# vgs
>   VG          #PV #LV #SN Attr   VSize  VFree
>   VolGroupC0    1   7   0 wz--nc  3.41T 1.48T
>   VolGroupC1    1   0   0 wz--nc  3.41T 3.41T
>   VolGroupSAS   1   2   0 wz--nc 20.63G 4.63G
> 
> [root@node01 lvm]# lvs
>   LV            VG          Attr   LSize
>   nered1        VolGroupC0  -wi-a- 200.00G
>   nered2        VolGroupC0  -wi-a- 200.00G
>   nered3        VolGroupC0  -wi-a-   1.46T
>   nered4        VolGroupC0  -wi-a-  20.00G
>   nered5        VolGroupC0  -wi-a-  20.00G
>   nered6        VolGroupC0  -wi-a-  20.00G
>   nered7        VolGroupC0  -wi-a-  20.00G
>   sasnered0     VolGroupSAS -wi-a-   8.00G
>   sasnered1     VolGroupSAS -wi-a-   8.00G                    
> 
> [root@node03 cache]# vgs
>   VG          #PV #LV #SN Attr   VSize  VFree
>   VolGroupC0    1   0   0 wz--nc  3.41T 3.41T
>   VolGroupC1    1   0   0 wz--nc  3.41T 3.41T
>   VolGroupSAS   1   2   0 wz--nc 20.63G 4.63G
> 
> [root@node03 lvm]# lvs
>   LV          VG          Attr   LSize Origin
>   sasnered0   VolGroupSAS -wi-a-   8.00G
>   sasnered1   VolGroupSAS -wi-a-   8.00G
> 
> 
> here is my lvm.conf:
> 
> [root@node01 lvm]# lvm dumpconfig
>   devices {
>         dir="/dev"
>         scan="/dev"
>         preferred_names=[]
>         filter=["a|^/dev/mapper/controller0$|",
> "a|^/dev/mapper/controller1$|", "a|^/dev/mapper/sas-xen$|", "r|.*|"]
>         cache_dir="/etc/lvm/cache"
>         cache_file_prefix=""
>         write_cache_state=0
>         sysfs_scan=1
>         md_component_detection=1
>         md_chunk_alignment=1
>         ignore_suspended_devices=0
>   }
>   dmeventd {
>         mirror_library="libdevmapper-event-lvm2mirror.so"
>         snapshot_library="libdevmapper-event-lvm2snapshot.so"
>   }
>   activation {
>         missing_stripe_filler="error"
>         reserved_stack=256
>         reserved_memory=8192
>         process_priority=-18
>         mirror_region_size=512
>         readahead="auto"
>         mirror_log_fault_policy="allocate"
>         mirror_device_fault_policy="remove"
>   }
>   global {
>         library_dir="/usr/lib64"
>         umask=63
>         test=0
>         units="h"
>         activation=1
>         proc="/proc"
>         locking_type=3
>         fallback_to_clustered_locking=1
>         fallback_to_local_locking=1
>         locking_dir="/var/lock/lvm"
>   }
>   shell {
>         history_size=100
>   }
>   backup {
>         backup=1
>         backup_dir="/etc/lvm/backup"
>         archive=1
>         archive_dir="/etc/lvm/archive"
>         retain_min=10
>         retain_days=30
>   }
>   log {
>         verbose=0
>         syslog=1
>         overwrite=0
>         level=0
>         indent=1
>         command_names=0
>         prefix="  "
>   }
> 
> 
> Note that logical volumes form C1 were present on node01 and node02,
> but after the node03 joined cluster they dissapeared. I'm running
> CentOS 5.3.
> 
> This is really dissapointing. Enterprise Linux? Linux maybe but not
> Enterprise... After much trouble with linux dm-multipath issues with
> my
> storage - which are unresolved and are waiting for RHEL 5.4, now
> clvmd.
> 
> Note that locking (DLM), cman, rgmanager, qdisk and all the other
> cluster services are working without problems. I just don't get it why
> is CLVM behaving this way?
> 
> I'm thinking about switching to non-clustered LVM - but are there
> issues with possible corruption of metadata? I won't create any new
> volumes nor snapshots or anything similar. Setup is done and it should
> work like this for the extended period of time.... But are there
> issues
> with activation or something else changing metadata?
> 
> 
> 
> --
> |    Jakov Sosic    |    ICQ: 28410271    |   PGP: 0x965CAE2D   |
> =================================================================
> | start fighting cancer -> http://www.worldcommunitygrid.org/   |
> 
> 
> 
> ------------------------------
> 
> Message: 8
> Date: Wed, 26 Aug 2009 15:09:11 +0100
> From: Christine Caulfield <ccaulfie@xxxxxxxxxx>
> Subject: Re:  CLVM not activating LVs
> To: linux clustering <linux-cluster@xxxxxxxxxx>
> Message-ID: <4A954207.9000609@xxxxxxxxxx>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> On 26/08/09 14:19, Jakov Sosic wrote:
> > Hi! CLVM not activating logical volumes.
> >
> >
> >
> > I have a major issues with CLVM. It is not activating volumes in my
> > VG's. I have 2 iSCSI volumes, and one SAS volume with 3 VG's. On
> > node01, all logical volumes in 1 iSCSI and on SAS are activated.
> Other
> > iSCSI - zero. On node02 same situation. On node03 only lv's from SAS
> > are activated. lvm.conf is same on the machines....
> >
> > This is very strange, because when I boot the machines, all the
> > services are stopped, so logical volumes shouldn't be activated.
> >
> > Here is the situation:
> >
> > [root@node01 lvm]# vgs
> >    VG          #PV #LV #SN Attr   VSize  VFree
> >    VolGroupC0    1   7   0 wz--nc  3.41T 1.48T
> >    VolGroupC1    1   0   0 wz--nc  3.41T 3.41T
> >    VolGroupSAS   1   2   0 wz--nc 20.63G 4.63G
> >
> > [root@node01 lvm]# lvs
> >    LV            VG          Attr   LSize
> >    nered1        VolGroupC0  -wi-a- 200.00G
> >    nered2        VolGroupC0  -wi-a- 200.00G
> >    nered3        VolGroupC0  -wi-a-   1.46T
> >    nered4        VolGroupC0  -wi-a-  20.00G
> >    nered5        VolGroupC0  -wi-a-  20.00G
> >    nered6        VolGroupC0  -wi-a-  20.00G
> >    nered7        VolGroupC0  -wi-a-  20.00G
> >    sasnered0     VolGroupSAS -wi-a-   8.00G
> >    sasnered1     VolGroupSAS -wi-a-   8.00G
> >
> > [root@node03 cache]# vgs
> >    VG          #PV #LV #SN Attr   VSize  VFree
> >    VolGroupC0    1   0   0 wz--nc  3.41T 3.41T
> >    VolGroupC1    1   0   0 wz--nc  3.41T 3.41T
> >    VolGroupSAS   1   2   0 wz--nc 20.63G 4.63G
> >
> > [root@node03 lvm]# lvs
> >    LV          VG          Attr   LSize Origin
> >    sasnered0   VolGroupSAS -wi-a-   8.00G
> >    sasnered1   VolGroupSAS -wi-a-   8.00G
> >
> >
> > here is my lvm.conf:
> >
> > [root@node01 lvm]# lvm dumpconfig
> >    devices {
> >       dir="/dev"
> >       scan="/dev"
> >       preferred_names=[]
> >       filter=["a|^/dev/mapper/controller0$|",
> > "a|^/dev/mapper/controller1$|", "a|^/dev/mapper/sas-xen$|", "r|.*|"]
> >       cache_dir="/etc/lvm/cache"
> >       cache_file_prefix=""
> >       write_cache_state=0
> >       sysfs_scan=1
> >       md_component_detection=1
> >       md_chunk_alignment=1
> >       ignore_suspended_devices=0
> >    }
> >    dmeventd {
> >       mirror_library="libdevmapper-event-lvm2mirror.so"
> >       snapshot_library="libdevmapper-event-lvm2snapshot.so"
> >    }
> >    activation {
> >       missing_stripe_filler="error"
> >       reserved_stack=256
> >       reserved_memory=8192
> >       process_priority=-18
> >       mirror_region_size=512
> >       readahead="auto"
> >       mirror_log_fault_policy="allocate"
> >       mirror_device_fault_policy="remove"
> >    }
> >    global {
> >       library_dir="/usr/lib64"
> >       umask=63
> >       test=0
> >       units="h"
> >       activation=1
> >       proc="/proc"
> >       locking_type=3
> >       fallback_to_clustered_locking=1
> >       fallback_to_local_locking=1
> >       locking_dir="/var/lock/lvm"
> >    }
> >    shell {
> >       history_size=100
> >    }
> >    backup {
> >       backup=1
> >       backup_dir="/etc/lvm/backup"
> >       archive=1
> >       archive_dir="/etc/lvm/archive"
> >       retain_min=10
> >       retain_days=30
> >    }
> >    log {
> >       verbose=0
> >       syslog=1
> >       overwrite=0
> >       level=0
> >       indent=1
> >       command_names=0
> >       prefix="  "
> >    }
> >
> >
> > Note that logical volumes form C1 were present on node01 and node02,
> > but after the node03 joined cluster they dissapeared. I'm running
> > CentOS 5.3.
> >
> > This is really dissapointing. Enterprise Linux? Linux maybe but not
> > Enterprise... After much trouble with linux dm-multipath issues with
> my
> > storage - which are unresolved and are waiting for RHEL 5.4, now
> clvmd.
> >
> > Note that locking (DLM), cman, rgmanager, qdisk and all the other
> > cluster services are working without problems. I just don't get it
> why
> > is CLVM behaving this way?
> >
> > I'm thinking about switching to non-clustered LVM - but are there
> > issues with possible corruption of metadata? I won't create any new
> > volumes nor snapshots or anything similar. Setup is done and it
> should
> > work like this for the extended period of time.... But are there
> issues
> > with activation or something else changing metadata?
> >
> 
> 
> You need to mark the shared VGs clustered using the command
> 
> # vgchange -cy <VGname>
> 
> 
> If you created them while clvmd was active then this is the default.
> If
> not then you will have to add it yourself as above.
> 
> 
> Chrissie
> 
> 
> 
> 
> ------------------------------
> 
> Message: 9
> Date: Wed, 26 Aug 2009 16:11:28 +0200
> From: Jakov Sosic <jakov.sosic@xxxxxxx>
> Subject:  3 node cluster and quorum disk?
> To: linux-cluster@xxxxxxxxxx
> Message-ID: <20090826161128.1e32721c@xxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=US-ASCII
> 
> Hi.
> 
> I have a situation - when two nodes are up in 3 node cluster, and one
> node goes down, cluster looses quorate - although I'm using qdiskd...
> 
> I think that problem is in switching qdisk master from one node to
> another. In that case, rgmanager disables all running services, which
> is
> not acceptable situation. Services are currently set to
> autostart="0" because cluster is in evaluation phase.
> 
> Here is my config:
> 
> <?xml version="1.0"?>
> <cluster alias="cluster-c00" config_version="56" name="cluster-c00">
>         <fence_daemon post_fail_delay="0" post_join_delay="120"/>
>         <!-- DEFINE CLUSTER NODES, AND FENCE DEVICES  -->
>         <clusternodes>
>                 <clusternode name="node01" nodeid="1" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="node01-ipmi"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>                 <clusternode name="node02" nodeid="2" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="node02-ipmi"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>                 <clusternode name="node03" nodeid="3" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="node03-ipmi"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>         </clusternodes>
> 
>         <!-- DEFINE CLUSTER MANAGER BEHAVIOUR -->
>         <cman expected_votes="3" deadnode_timeout="80"/>
> <!--            <multicast addr="224.0.0.1"/>   </cman> -->
> 
>         <!-- Token -->
>         <totem token="55000"/>
> 
>         <!-- Quorum Disk -->
>         <quorumd interval="5" tko="5" votes="2"
>         label="SAS-qdisk" status_file="/tmp/qdisk"/>
> 
>         <!-- DEFINE FENCE DEVICES -->
>         <fencedevices>
>                 <fencedevice agent="fence_ipmilan" auth="password"
>         ipaddr="" login="" passwd="" name="node01-ipmi"/>
>                 <fencedevice agent="fence_ipmilan" auth="password"
>         ipaddr="" login="" passwd="" name="node02-ipmi"/>
>                 <fencedevice agent="fence_ipmilan" auth="password"
>         ipaddr="" login="" passwd="" name="node03-ipmi"/>
>         </fencedevices>
> 
> </cluster>
> 
> Should I change any of the timeouts?
> 
> 
> 
> 
> 
> 
> --
> |    Jakov Sosic    |    ICQ: 28410271    |   PGP: 0x965CAE2D   |
> =================================================================
> | start fighting cancer -> http://www.worldcommunitygrid.org/   |
> 
> 
> 
> ------------------------------
> 
> Message: 10
> Date: Wed, 26 Aug 2009 16:13:23 +0200
> From: Jakov Sosic <jakov.sosic@xxxxxxx>
> Subject: Re:  CLVM not activating LVs
> To: linux-cluster@xxxxxxxxxx
> Message-ID: <20090826161323.41bef066@xxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=US-ASCII
> 
> On Wed, 26 Aug 2009 15:09:11 +0100
> Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:
> 
> > You need to mark the shared VGs clustered using the command
> >
> > # vgchange -cy <VGname>
> >
> >
> > If you created them while clvmd was active then this is the default.
> > If not then you will have to add it yourself as above.
> 
> They were created as clustered.
> 
> [root@node03 ~]# vgchange -cy VolGroupC0
>   Volume group "VolGroupC0" is already clustered
> [root@node03 ~]# vgchange -cy VolGroupC1
>   Volume group "VolGroupC1" is already clustered
> [root@node03 ~]# vgchange -cy VolGroupSAS
>   Volume group "VolGroupSAS" is already clustered
> 
> 
> 
> --
> |    Jakov Sosic    |    ICQ: 28410271    |   PGP: 0x965CAE2D   |
> =================================================================
> | start fighting cancer -> http://www.worldcommunitygrid.org/   |
> 
> 
> 
> ------------------------------
> 
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
> 
> End of Linux-cluster Digest, Vol 64, Issue 42
> *********************************************
> 
> 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

[Index of Archives]     [Corosync Cluster Engine]     [GFS]     [Linux Virtualization]     [Centos Virtualization]     [Centos]     [Linux RAID]     [Fedora Users]     [Fedora SELinux]     [Big List of Linux Books]     [Yosemite Camping]

  Powered by Linux