Re: Some monitors have still not reached quorum

The only communication on the cluster network is between OSDs. All other traffic (clients, mons, MDS, etc.) is on the public network. The cluster network is what the OSDs use to backfill, recover, send replica copies of your data to the secondary OSDs, read parts of EC objects before the primary sends them to the client, scrub, and everything else where OSDs talk to each other. No node other than the nodes holding OSDs needs, or should have, an IP on the cluster network.
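As a minimal sketch of that split (the subnets below are placeholders, not from this thread; only the OSD hosts need an address in the cluster network):

[global]
public network  = 2001:db8:1::/64    # clients, mons, MDS and OSDs
cluster network = 2001:db8:2::/64    # OSD-to-OSD replication, backfill, recovery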

On Fri, Feb 23, 2018, 6:43 AM Kevin Olbrich <ko@xxxxxxx> wrote:
I found a fix: it is mandatory to set the public network to the same network the mons use.
Skipping this while the mon has another network interface writes garbage into the monmap.
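In my case that means the public network line must match the subnet of the addresses in mon_host, e.g.:

public network = fd91:462b:4243:47e::/64
mon_host = [fd91:462b:4243:47e::1:1],[fd91:462b:4243:47e::1:2],[fd91:462b:4243:47e::1:3]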

- Kevin

2018-02-23 11:38 GMT+01:00 Kevin Olbrich <ko@xxxxxxx>:
I always see this:

[mon01][DEBUG ]     "mons": [
[mon01][DEBUG ]       {
[mon01][DEBUG ]         "addr": "[fd91:462b:4243:47e::1:1]:6789/0",
[mon01][DEBUG ]         "name": "mon01",
[mon01][DEBUG ]         "public_addr": "[fd91:462b:4243:47e::1:1]:6789/0",
[mon01][DEBUG ]         "rank": 0
[mon01][DEBUG ]       },
[mon01][DEBUG ]       {
[mon01][DEBUG ]         "addr": "0.0.0.0:0/1",
[mon01][DEBUG ]         "name": "mon02",
[mon01][DEBUG ]         "public_addr": "0.0.0.0:0/1",
[mon01][DEBUG ]         "rank": 1
[mon01][DEBUG ]       },
[mon01][DEBUG ]       {
[mon01][DEBUG ]         "addr": "0.0.0.0:0/2",
[mon01][DEBUG ]         "name": "mon03",
[mon01][DEBUG ]         "public_addr": "0.0.0.0:0/2",
[mon01][DEBUG ]         "rank": 2
[mon01][DEBUG ]       }
[mon01][DEBUG ]     ]


DNS is working fine and the hostnames are also listed in /etc/hosts.
I already purged the mon, but the problem is still the same.
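For reference, one way to inspect what a mon actually stored in its monmap (a sketch assuming systemd units named ceph-mon@<id>; the mon must be stopped while extracting):

systemctl stop ceph-mon@mon01
ceph-mon -i mon01 --extract-monmap /tmp/monmap   # dump this mon's current monmap
monmaptool --print /tmp/monmap                   # shows the addr and rank of each mon
systemctl start ceph-mon@mon01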

- Kevin


2018-02-23 10:26 GMT+01:00 Kevin Olbrich <ko@xxxxxxx>:
Hi!

On a new cluster, I get the following error. All three mons are connected to the same switch and ping between them works (firewalls disabled).
The mon nodes run Ubuntu 16.04 LTS with Ceph Luminous.


[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] mon03
[ceph_deploy.mon][ERROR ] mon02
[ceph_deploy.mon][ERROR ] mon01
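Each mon's own view of the quorum can be checked on the node itself through the admin socket (a sketch assuming the default socket path):

ceph daemon mon.mon01 mon_status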


root@adminnode:~# cat ceph.conf
[global]
fsid = 2689defb-8715-47bb-8d78-e862089adf7a
ms_bind_ipv6 = true
mon_initial_members = mon01, mon02, mon03
mon_host = [fd91:462b:4243:47e::1:1],[fd91:462b:4243:47e::1:2],[fd91:462b:4243:47e::1:3]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = fdd1:ecbd:731f:ee8e::/64
cluster network = fd91:462b:4243:47e::/64


root@mon01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:ae:ed:e9:b6:61 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.1/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd91:462b:4243:47e::1:1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::baae:edff:fee9:b661/64 scope link
       valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:db:df:64:34:d5 brd ff:ff:ff:ff:ff:ff
4: eth0.22@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ae:ed:e9:b6:61 brd ff:ff:ff:ff:ff:ff
    inet6 fdd1:ecbd:731f:ee8e::1:1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::baae:edff:fee9:b661/64 scope link
       valid_lft forever preferred_lft forever


Don't mind wlan0; that's there because this node is built from an Intel NUC.

Any idea?

Kind regards
Kevin


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
