You're right. I checked the other servers and noticed that only one server has the problem; the others are all fine.
I can't figure out what is different. All the servers were cloned from the same VM image, so I don't know why this happened.
I'll reinstall everything from scratch and report back to this thread afterwards.
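For reference, the reinstall will roughly follow this ceph-deploy sequence (a sketch only; the hostnames are the ones from the monmap below, and the OSD disk preparation steps are omitted):
---
# remove packages, data and keys from every node
ceph-deploy purge ceph-mds ceph-osd0 ceph-osd1 ceph-osd2
ceph-deploy purgedata ceph-mds ceph-osd0 ceph-osd1 ceph-osd2
ceph-deploy forgetkeys

# redeploy from scratch
ceph-deploy new ceph-mds ceph-osd0 ceph-osd1 ceph-osd2
ceph-deploy install ceph-mds ceph-osd0 ceph-osd1 ceph-osd2
ceph-deploy mon create ceph-mds ceph-osd0 ceph-osd1 ceph-osd2
ceph-deploy gatherkeys ceph-mds
---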
I've been watching my every step since 1995.
http://kirrie.pe.kr
2013/9/3 Joao Eduardo Luis <joao.luis@xxxxxxxxxxx>
On 09/03/2013 02:02 AM, 이주헌 wrote:
Hi all.
I have 1 MDS and 3 OSDs, installed via ceph-deploy (dumpling, version 0.67.2).
At first, everything worked perfectly. But after I rebooted one of the OSD nodes, its ceph-mon came up on port 6800 instead of 6789.
This has been a recurrent issue that I've been completely unable to reproduce so far.
Are you able to reproduce this reliably?
Could you share the steps that led you to this state?
-Joao
This is the output of 'ceph -s':
---
cluster c59d13fd-c4c9-4cd0-b2ed-b654428b3171
health HEALTH_WARN 1 mons down, quorum 0,1,2 ceph-mds,ceph-osd0,ceph-osd1
monmap e1: 4 mons at {ceph-mds=192.168.13.135:6789/0,ceph-osd0=192.168.13.136:6789/0,ceph-osd1=192.168.13.137:6789/0,ceph-osd2=192.168.13.138:6789/0}, election epoch 206, quorum 0,1,2 ceph-mds,ceph-osd0,ceph-osd1
osdmap e22: 4 osds: 2 up, 2 in
pgmap v67: 192 pgs: 192 active+clean; 145 MB data, 2414 MB used, 18043 MB / 20458 MB avail
mdsmap e4: 1/1/1 up {0=ceph-mds=up:active}
---
The '1 mons down' mon is actually running, but on port 6800.
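For reference, this is how I can check where that mon is actually listening on the affected node (I'm assuming here that the monitor is mon.ceph-osd2 and that it uses the default admin socket path):
---
# which TCP ports the ceph-mon process is bound to
netstat -tlnp | grep ceph-mon

# ask the monitor itself what address it thinks it has
# (assumes the default admin socket path for mon.ceph-osd2)
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-osd2.asok mon_status
---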
This is the /etc/ceph/ceph.conf that was created automatically by ceph-deploy:
---
[global]
fsid = c59d13fd-c4c9-4cd0-b2ed-b654428b3171
mon_initial_members = ceph-mds, ceph-osd0, ceph-osd1, ceph-osd2
mon_host = 192.168.13.135,192.168.13.136,192.168.13.137,192.168.13.138
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
---
As I understand it, ceph-mon's default port is 6789.
Why does it run on 6800 instead?
Restarting ceph-mon gives the same result.
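Would explicitly pinning the monitor's address in ceph.conf be a workaround? Just a sketch of what I mean, assuming the affected monitor is the one on ceph-osd2:
---
[mon.ceph-osd2]
host = ceph-osd2
mon addr = 192.168.13.138:6789
---
followed by restarting that mon (restart ceph-mon id=ceph-osd2 under upstart, or service ceph restart mon.ceph-osd2 under sysvinit).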
Sorry for my poor English; I don't write or speak English fluently.
--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com