On 12/10/2015 04:00 AM, deeepdish wrote:
> Hello,
> [root@b02s08 ~]# [snip]
Thanks Joao. I had a look, but my other 3 monitors are working just fine. To be clear, I've confirmed the same behaviour on other monitor nodes that have been removed from the cluster and rebuilt with a new IP (but the same name).

[global]
fsid = (hidden)
mon_initial_members = smg01, smon01s, smon02s, b02s08
mon_host = 10.20.10.250, 10.20.10.251, 10.20.10.252, 10.20.1.8
public network = 10.20.10.0/24, 10.20.1.0/24
cluster network = 10.20.41.0/24
. . .

[mon.smg01s]
#host = smg01s.erbus.kupsta.net
host = smg01s
addr = 10.20.10.250:6789

[mon.smon01s]
#host = smon01s.erbus.kupsta.net
host = smon01s
addr = 10.20.10.251:6789

[mon.smon02s]
#host = smon02s.erbus.kupsta.net
host = smon02s
addr = 10.20.10.252:6789

[mon.b02s08]
#host = b02s08.erbus.kupsta.net
host = b02s08
addr = 10.20.1.8:6789

# sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.smg01.asok mon_status
{
    "name": "smg01",
    "rank": 0,
    "state": "probing",
    "election_epoch": 0,
    "quorum": [],
    "outside_quorum": [
        "smg01"
    ],
    "extra_probe_peers": [
        "10.20.1.8:6789\/0",
        "10.20.10.251:6789\/0",
        "10.20.10.252:6789\/0"
    ],
    "sync_provider": [],
    "monmap": {
        "epoch": 0,
        "fsid": "(hidden)",
        "modified": "0.000000",
        "created": "0.000000",
        "mons": [
            {
                "rank": 0,
                "name": "smg01",
                "addr": "10.20.10.250:6789\/0"
            },
            {
                "rank": 1,
                "name": "smon01s",
                "addr": "0.0.0.0:0\/1"
            },
            {
                "rank": 2,
                "name": "smon02s",
                "addr": "0.0.0.0:0\/2"
            },
            {
                "rank": 3,
                "name": "b02s08",
                "addr": "0.0.0.0:0\/3"
            }
        ]
    }
}

Processes running on the monitor node that's in probing state:

# ps -ef | grep ceph
root      1140      1  0 Dec11 ?        00:05:07 python /usr/sbin/ceph-create-keys --cluster ceph -i smg01
root      6406      1  0 Dec11 ?        00:05:10 python /usr/sbin/ceph-create-keys --cluster ceph -i smg01
root      7712      1  0 Dec11 ?        00:05:09 python /usr/sbin/ceph-create-keys --cluster ceph -i smg01
root      9105      1  0 Dec11 ?        00:05:11 python /usr/sbin/ceph-create-keys --cluster ceph -i smg01
root     13098  30548  0 07:18 pts/1    00:00:00 grep --color=auto ceph
root     14243      1  0 Dec11 ?        00:05:09 python /usr/sbin/ceph-create-keys --cluster ceph -i smg01
root     31222      1  0 05:39 ?        00:00:00 /bin/bash -c ulimit -n 32768; /usr/bin/ceph-mon -i smg01 --pid-file /var/run/ceph/mon.smg01.pid -c /etc/ceph/ceph.conf --cluster ceph -f
root     31226  31222  1 05:39 ?        00:01:39 /usr/bin/ceph-mon -i smg01 --pid-file /var/run/ceph/mon.smg01.pid -c /etc/ceph/ceph.conf --cluster ceph -f
root     31228      1  0 05:39 pts/1    00:00:15 python /usr/sbin/ceph-create-keys --cluster ceph -i smg01
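For what it's worth, the telltale signs in the mon_status output above are the monmap epoch of 0 and the 0.0.0.0:0/N placeholder addresses: this mon has never received a real monmap from any peer, so it keeps probing. A minimal Python sketch (working from an abridged copy of that JSON; this is just an illustration, not a ceph tool) to list the peers the mon has never reached:

```python
import json

# Abridged mon_status output, as shown above.
mon_status = json.loads("""
{
  "name": "smg01",
  "state": "probing",
  "monmap": {
    "epoch": 0,
    "mons": [
      {"rank": 0, "name": "smg01",   "addr": "10.20.10.250:6789/0"},
      {"rank": 1, "name": "smon01s", "addr": "0.0.0.0:0/1"},
      {"rank": 2, "name": "smon02s", "addr": "0.0.0.0:0/2"},
      {"rank": 3, "name": "b02s08",  "addr": "0.0.0.0:0/3"}
    ]
  }
}
""")

# 0.0.0.0 addresses are placeholders for peers this monitor has
# never successfully probed; epoch 0 means no real monmap yet.
unreached = [m["name"] for m in mon_status["monmap"]["mons"]
             if m["addr"].startswith("0.0.0.0")]
print(unreached)  # ['smon01s', 'smon02s', 'b02s08']
```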
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com