I did something like this the other day
on a test cluster... can't guarantee the same results, but it
worked for me. I don't see an official procedure documented
anywhere. I didn't have mds or radosgw. (I also renamed the
cluster at the same time... I omitted those steps)
assuming services are stopped, and assuming your cluster is named "ceph" (the default), things to change:

/etc/ceph/ceph.conf (deploy everywhere)
    change hostnames here (rough example of what I mean near the end of this mail)

rename dirs (repeat on each mon)
    /var/lib/ceph/mon/ceph-oldhostname -> /var/lib/ceph/mon/ceph-newhostname
    also check mds, etc.

monmap (mon nodes)
    # first section on just one mon
    # here newhostname matches the dir name /var/lib/ceph/mon/ceph-newhostname
    ceph-mon --cluster ceph -i newhostname --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    # repeat for each old name
    monmaptool --rm oldname1 /tmp/monmap
    # repeat for each new name
    monmaptool --add newname1 ipgoeshere:6789 /tmp/monmap
    monmaptool --print /tmp/monmap

    # last section: using the file produced on the first mon, repeat on each monitor node
    ceph-mon --cluster ceph -i newhostname --inject-monmap /tmp/monmap

In theory something should be done about renaming the auth keys (ceph auth ...), but I didn't do that, and I don't see any such auth keys... mine just has some bootstrap ones. I don't know if that's standard or not. If you had to do it, maybe copying them first and removing them afterwards is best (there's a sketch of that near the end too). Or run the cluster with cephx disabled temporarily to fix it.

Then start mons only.

This next part is for if you renamed some osd host names, and it requires running mons:
    # output to compare to later
    ceph osd tree
    # I didn't do this step... but I think this ought to be right
    ceph osd crush rename-bucket oldname newname
    # verify it looks right now
    ceph osd tree
    # if it looks wrong, e.g. you now have extra hosts left over (which might
    # happen if you start osds before renaming), use rename-bucket or rm:
    # ceph osd crush rm "oldname"

Then start osds.

If you have mds servers you renamed, there's auth for that... rename those clients, probably. That means the /var/lib/ceph/mds/... dirs, and maybe the client name inside the keyring there. I don't know this step. And no idea about radosgw.

Test on a test cluster first. And I have no idea whether it will result in data movement. You may want to prepare by setting `ceph osd set norecover`, lowering max backfills, etc. too (sketch near the end).
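By "change hostnames here" in ceph.conf I mean things like mon initial members / mon host and any per-daemon host = lines. Roughly like this -- the names and IPs are only placeholders, and your conf may look different:

    [global]
    mon initial members = newname1, newname2, newname3
    mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

    [mon.newname1]
    host = newname1
    mon addr = 10.0.0.1:6789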
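For the auth key part I mentioned but didn't do: an untested sketch of the "copy first, remove after" idea might look like the below. mds.oldhostname is just a placeholder entity name -- check `ceph auth list` for what you actually have.

    # untested -- save the old entity's key and caps to a keyring file
    ceph auth get mds.oldhostname -o /tmp/old.keyring
    # edit /tmp/old.keyring so the [mds.oldhostname] header says mds.newhostname,
    # then import it, which creates the new entity with the same key and caps
    ceph auth import -i /tmp/old.keyring
    # once the renamed daemon works, remove the old entity
    ceph auth del mds.oldhostname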
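And for the norecover / max backfills preparation, this is roughly what I mean (just a sketch; injectargs only reaches running osds, and remember to unset the flags afterwards):

    ceph osd set norecover
    ceph osd set nobackfill
    ceph osd set noout
    # lower backfill/recovery activity at runtime
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # ...do the renames, watch ceph -s, then:
    ceph osd unset noout
    ceph osd unset nobackfill
    ceph osd unset norecover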
--
--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@xxxxxxxxxxxxxxxxxxxx
Internet: http://www.brockmann-consult.de
--------------------------------------------