Re: Hassle with clvmd over external network

Sebastian Walter wrote:
> Dear list,
> 
> I'm trying to set up RHCS and GFS in a cluster that is on two networks:
> an internal one (eth0, 10.1.0.0/16, DNS names host.local) and an
> external one (eth1, our real-world subnet and DNS names). Every cluster
> node has an interface and an IP address in both networks.
> 
> When I set up RHCS and GFS on the local subnet, everything works fine
> (ccsd, cman, clvmd, ... and also mountable GFS volumes). But when I
> change cluster.conf to use the real-world addresses (I want to use the
> GFS volumes outside the cluster as well), clvmd always causes problems.
> I followed the FAQ and changed /etc/init.d/cman to join with
> -n host.external.dns.com. All hosts are in /etc/hosts. ccsd starts fine
> on all nodes, as do cman and fenced. But when I try to start the clvmd
> service on all nodes simultaneously, I get errors (Starting clvmd:
> clvmd startup timed out).
> 
> This is what my /proc gives me:
> [root@dtm ~]# cat /proc/cluster/services
> Service          Name                              GID LID State     Code
> Fence Domain:    "default"                          11   2 run       -
> [8 7 6 5 4 3 2 9 10 11 1]
> 
> DLM Lock Space:  "clvmd"                            14   3 join     
> S-6,20,11
> [8 7 6 5 4 3 2 9 10 11 1]
> 
> [root@compute-0-1 ~]# cat /proc/cluster/services
> Service          Name                              GID LID State     Code
> Fence Domain:    "default"                          11   2 run       -
> [2 3 4 5 6 7 9 10 8 11 1]
> 
> DLM Lock Space:  "clvmd"                            14   3 update    U-4,1,1
> [2 3 4 5 6 7 8 9 10 11 1]
> 
> (I get the second output from all the other nodes; I think it depends
> on which host I start the service on first.)
> 
> Does anybody know how the clvmd daemons communicate with each other?
> cman is doing fine... any other experiences? Thanks for any advice...
> 


It'll be waiting for the DLM lockspace creation to complete on all nodes — your
/proc output shows the "clvmd" lockspace stuck in the "join"/"update" states
rather than "run". Have a look in syslog for DLM messages.
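For context, the node names cman (and thus the DLM) uses come from the
clusternode name attributes in cluster.conf; switching those to the external
DNS names is what the original poster describes. A minimal sketch with
hypothetical hostnames (fencing and other details omitted):

```xml
<?xml version="1.0"?>
<cluster name="testcluster" config_version="2">
  <clusternodes>
    <!-- each name must resolve to the address of the interface
         cman should bind to (here: the external eth1 names) -->
    <clusternode name="host1.external.dns.com" votes="1"/>
    <clusternode name="host2.external.dns.com" votes="1"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
</cluster>
```

The name passed via -n in /etc/init.d/cman should match the clusternode name
given here, since cman uses it to locate the node's own entry.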

-- 
Patrick

Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street,
Windsor, Berkshire, SL4 1TE, UK.
Registered in England and Wales under Company Registration No. 3798903

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
