Re: Adding new mon to existing cluster in ceph v0.39(+?)

Now I don't remember the exact message, but it was about the public addr of
a mon in the conf, and a problem with the new mon not appearing in the
monmap. My mon section looks like this:

        mon data = /vol0/data/mon.$id

        ; some minimal logging (just message traffic) to aid debugging

        debug ms = 1     ; see message traffic
        debug mon = 0   ; monitor
        debug paxos = 0 ; monitor replication
        debug auth = 0

        mon allowed clock drift = 2

[mon.0]
        host = s3-10-177-64-4
        mon addr = 10.177.64.4:6789

[mon.1]
        host = s3-10-177-64-6
        mon addr = 10.177.64.6:6789

[mon.2]
        host = s3-10-177-64-8
        mon addr = 10.177.64.8:6789
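
For illustration, a new mon being added would get its own section in the
same style (host and address here are placeholders, not my real values):

[mon.3]
        host = <new-mon-host>
        mon addr = <new-mon-ip>:6789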

But as I can see, there is now an updated doc about adding a new mon.

2012/1/10 Samuel Just <sam.just@xxxxxxxxxxxxx>:
> It looks like in step one you needed to supply either a monmap or the
> addresses of existing monitors.  What errors did you encounter?
> -Sam
>
> 2012/1/10 Sławomir Skowron <slawomir.skowron@xxxxxxxxx>:
>> I had some problems adding a new mon to an existing ceph cluster.
>>
>> The cluster now contains 3 mons, but I started with only one, on a single
>> machine, then added a second and a third machine, each with a new mon and
>> OSD. Adding a new OSD is quite simple, but adding a new mon means piecing
>> together bits from the old ceph docs, the new docs, and various mailing
>> list threads.
>>
>> This -> http://ceph.newdream.net/docs/latest/ops/manage/grow/mon/ -
>> does not work properly in the "Adding a monitor" section.
>>
>> Maybe this will be useful for someone:
>>
>> 1. Create the new mon's data structure from an existing, working mon
>> instance (perhaps one created with mkcephfs when the cluster was
>> initialized); a combined example follows the sub-steps.
>>
>>  a) Edit ceph.conf and add the new mon's definition to the mon part of
>> the conf across the whole cluster.
>>
>>  b) ceph auth get mon. -o /tmp/monkey
>>
>>  c) fsid=`ceph fsid --concise`
>>
>>  d) ceph-mon -i <new-mon-id> --mkfs -k /tmp/monkey --fsid $fsid
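>>
>> Putting step 1 together (using 3 as the new mon id, just as an example):
>>
>>   $ ceph auth get mon. -o /tmp/monkey
>>   $ fsid=`ceph fsid --concise`
>>   $ ceph-mon -i 3 --mkfs -k /tmp/monkey --fsid $fsid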
>>
>> 2. Before you start the new mon (check that the new mon is not running -
>> in my case it would not start even when I tried :)), a few things must be
>> done first.
>> This is based on http://ceph.newdream.net/docs/latest/ops/manage/grow/mon/
>> (Removing a monitor from an unhealthy or down cluster).
>>
>>  a) On a surviving monitor node, find the most recent monmap in the mon
>> data dir, as in the doc about removing a monitor.
>>
>>  b) On a surviving monitor node:
>>
>>   $ cp $mon_data/monmap/<latest-monmap-id> /tmp/foo
>>   $ monmaptool /tmp/foo --add <new-mon-id> <new-mon-ip>:<new-mon-port>
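>>
>>   For example, with 3 as the new mon id and 10.177.64.10:6789 as its
>>   address (both hypothetical):
>>
>>   $ monmaptool /tmp/foo --add 3 10.177.64.10:6789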
>>
>>  c) Inject the new monmap into a working ceph mon (the target mon
>> daemon should be stopped while you inject it, then started again):
>>
>>     ceph-mon -i <surviving-mon-id> --inject-monmap /tmp/foo
>>
>>     ceph -s will then show the new number of mons.
>>
>>  d) Copy /tmp/foo and inject this monmap into every mon running in the
>> existing cluster, including on the machine with the new mon, to update
>> the monmap in the new mon's directory.
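>>
>>   For example, for mon.1 on s3-10-177-64-6 from my conf above (with
>>   that mon stopped while injecting):
>>
>>   $ scp /tmp/foo s3-10-177-64-6:/tmp/foo
>>   $ ssh s3-10-177-64-6 ceph-mon -i 1 --inject-monmap /tmp/foo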
>>
>>  e) Start the new mon:
>>
>>   service ceph start mon
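>>
>>   or, to start just the new daemon (assuming the init script accepts a
>>   type.id argument; 3 is again the hypothetical new mon id):
>>
>>   service ceph start mon.3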
>>
>> then mon_status will show the new list of mons in the ceph cluster:
>>
>>   ceph mon_status
>>
>> Now the new mon works perfectly.
>>
>>
>> Maybe it's not a supported way to insert a new mon into a cluster, but
>> for me, right now, it's the only way that works :)
>>
>> --
>> -----
>> Regards
>>
>> Sławek "sZiBis" Skowron



-- 
-----
Regards

Sławek "sZiBis" Skowron

