I have 1 MDS and 3 OSDs, installed via ceph-deploy (dumpling, version 0.67.2).
At first everything worked perfectly, but after I rebooted one of the OSD hosts, its ceph-mon came up listening on port 6800 instead of 6789.
This is the output of 'ceph -s':
---
cluster c59d13fd-c4c9-4cd0-b2ed-b654428b3171
health HEALTH_WARN 1 mons down, quorum 0,1,2 ceph-mds,ceph-osd0,ceph-osd1
monmap e1: 4 mons at {ceph-mds=192.168.13.135:6789/0,ceph-osd0=192.168.13.136:6789/0,ceph-osd1=192.168.13.137:6789/0,ceph-osd2=192.168.13.138:6789/0}, election epoch 206, quorum 0,1,2 ceph-mds,ceph-osd0,ceph-osd1
osdmap e22: 4 osds: 2 up, 2 in
pgmap v67: 192 pgs: 192 active+clean; 145 MB data, 2414 MB used, 18043 MB / 20458 MB avail
mdsmap e4: 1/1/1 up {0=ceph-mds=up:active}
---
The "1 mons down" monitor is not actually dead -- it is running, but on port 6800.
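For reference, this is how the port can be checked on the rebooted node (the monitor id "ceph-osd2" and the default admin socket path are my assumption):
---
# see which port the ceph-mon process is actually bound to
sudo netstat -tlnp | grep ceph-mon

# ask the monitor directly through its admin socket
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-osd2.asok mon_status
---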
This is the /etc/ceph/ceph.conf that was created automatically by ceph-deploy:
---
[global]
fsid = c59d13fd-c4c9-4cd0-b2ed-b654428b3171
mon_initial_members = ceph-mds, ceph-osd0, ceph-osd1, ceph-osd2
mon_host = 192.168.13.135,192.168.13.136,192.168.13.137,192.168.13.138
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
---
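As far as I know, the monitor address can also be pinned explicitly per monitor in ceph.conf, something like the sketch below (using my own hostname and IP; I have not applied this yet):
---
[mon.ceph-osd2]
host = ceph-osd2
mon addr = 192.168.13.138:6789
---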
As far as I understand, ceph-mon's default port is 6789. Why does it run on 6800 instead?
Restarting ceph-mon gives the same result.
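I restart the monitor with something like the following (the exact command depends on the init system; the monitor id is assumed to be the hostname):
---
# sysvinit / service script
sudo service ceph restart mon.ceph-osd2
# or, on Ubuntu with upstart
sudo restart ceph-mon id=ceph-osd2
---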
Sorry for my poor English; I don't write or speak it fluently.