Re: Rename Ceph cluster

Thanks to all!

Everything worked like a charm (rough command sketch below):
1) Stopped the cluster (I guess it's faster than moving OSDs one by one)
2) Unmounted OSDs and fixed fstab entries for them
3) Renamed the MON and OSD folders
4) Renamed config file and keyrings, fixed paths to keyrings in config
5) Mounted OSDs back (mount -a)
6) Started everything
7) Fixed the path to the config in nova.conf, cinder.conf and glance-api.conf
across OpenStack
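
For reference, here is roughly what it looked like on one node (a sketch only -
the hostname "node1", OSD id 0 and the exact devices are illustrative
assumptions, and sysvinit-style "service ceph" management is assumed; adjust to
your own setup):

service ceph -a stop                                 # 1) stop the whole cluster
umount /var/lib/ceph/osd/ceph-prod-0                 # 2) unmount each OSD...
sed -i 's/ceph-prod-/ceph-/g' /etc/fstab             #    ...and fix its fstab entry
mv /var/lib/ceph/mon/ceph-prod-node1 /var/lib/ceph/mon/ceph-node1  # 3) rename MON dir
mv /var/lib/ceph/osd/ceph-prod-0 /var/lib/ceph/osd/ceph-0          #    and OSD dirs
mv /etc/ceph/ceph-prod.conf /etc/ceph/ceph.conf                    # 4) rename config...
mv /etc/ceph/ceph-prod.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
# ...and fix any "keyring = /etc/ceph/ceph-prod.*" paths inside ceph.conf
mount -a                                             # 5) mount OSDs back
service ceph -a start                                # 6) start everything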

Everything works as expected. The whole job took about half an hour.
Again, thanks to all!

Regards, Vasily.

2015-08-19 15:10 GMT+08:00 Межов Игорь Александрович <megov@xxxxxxxxxx>:
> Hi!
>
> I think that renaming a cluster is more than just moving the config file. We
> tried to rename a test Hammer cluster created with ceph-deploy and ran into
> some issues.
>
> In a default install, the names of many parts are derived from the cluster name.
> For example, cephx keys are stored not in "ceph.client.admin.keyring" but in
> "$CLUSTERNAME.client.admin.keyring", so we had to rename the keyrings as
> well.
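>
> For instance (hypothetical names, going from the default "ceph" to "testname"):
>
> mv /etc/ceph/ceph.client.admin.keyring /etc/ceph/testname.client.admin.keyring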
>
> The same goes for the OSD/MON mount points: instead of
> /var/lib/ceph/osd/ceph-$OSDNUM, after renaming the cluster the daemons try
> to run the OSDs from /var/lib/ceph/osd/$CLUSTERNAME-$OSDNUM. Of course,
> no such mount points exist, so we created them manually, mounted the
> filesystems and restarted the OSDs.
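>
> A per-OSD fix-up might look like this (a sketch - the new name "testname",
> OSD 0 and the data device /dev/sdb1 are made-up examples):
>
> mkdir -p /var/lib/ceph/osd/testname-0                # create the new mountpoint
> mount /dev/sdb1 /var/lib/ceph/osd/testname-0         # mount the OSD filesystem
> service ceph -c /etc/ceph/testname.conf start osd.0  # re-run the OSD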
>
> There is one unresolved issue with the udev rules: after a node reboot, the
> filesystems are mounted by udev into the old mount points. As the cluster is
> only for testing, this is not a big deal.
>
> So, be careful when renaming a production or loaded cluster.
>
> PS: All of the above is IMHO and I may be wrong. ;)
>
> Megov Igor
> CIO, Yuterra
>
>
> ________________________________
> From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Jan Schermer
> <jan@xxxxxxxxxxx>
> Sent: 18 August 2015 15:18
> To: Erik McCormick
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Rename Ceph cluster
>
> I think it's pretty clear:
>
> http://ceph.com/docs/master/install/manual-deployment/
>
> "For example, when you run multiple clusters in a federated architecture,
> the cluster name (e.g., us-west, us-east) identifies the cluster for the
> current CLI session. Note: To identify the cluster name on the command line
> interface, specify a Ceph configuration file with the cluster name
> (e.g., ceph.conf, us-west.conf, us-east.conf, etc.). Also see CLI usage
> (ceph --cluster {cluster-name})."
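>
> For example (a sketch; assumes a /etc/ceph/us-west.conf exists on the client):
>
> ceph --cluster us-west status   # picks up us-west.conf and the matching keyring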
>
> But it could be tricky on OSDs that are running, depending on the
> distribution's init scripts - you could find out that you can't "service ceph
> stop osd..." anymore after the change, since it can't find its pidfile
> anymore. Looking at the CentOS init script, it looks like it accepts a
> "-c conffile" argument though.
> (So you should be managing OSDs with "-c ceph-prod.conf" now?)
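>
> Presumably something like this (an untested sketch of that "-c" usage):
>
> service ceph -c /etc/ceph/ceph-prod.conf stop osd.0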
>
> Jan
>
>
> On 18 Aug 2015, at 14:13, Erik McCormick <emccormick@xxxxxxxxxxxxxxx> wrote:
>
> I've got a custom-named cluster integrated with OpenStack (Juno) and didn't
> run into any hard-coded name issues that I can recall. Where are you seeing
> that?
>
> As to the name change itself, I think it's really just a label applied to a
> configuration set. The name doesn't actually appear *in* the configuration
> files. It stands to reason you should be able to rename the configuration
> files on the client side and leave the cluster alone. It'd be worth trying in
> a test environment anyway.
>
> -Erik
>
> On Aug 18, 2015 7:59 AM, "Jan Schermer" <jan@xxxxxxxxxxx> wrote:
>>
>> This should be simple enough
>>
>> mv /etc/ceph/ceph-prod.conf /etc/ceph/ceph.conf
>>
>> No? :-)
>>
>> Or you could set this in nova.conf:
>> images_rbd_ceph_conf=/etc/ceph/ceph-prod.conf
>>
>> Obviously, since different parts of OpenStack have their own configs, you'd
>> have to do something similar for cinder/glance... so it's probably not worth the hassle.
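>>
>> The equivalents would be something like this (option names as I recall them
>> from Juno-era docs - verify against your release):
>>
>> # cinder.conf, in the RBD backend section:
>> rbd_ceph_conf = /etc/ceph/ceph-prod.conf
>> # glance-api.conf:
>> rbd_store_ceph_conf = /etc/ceph/ceph-prod.conf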
>>
>> Jan
>>
>> > On 18 Aug 2015, at 13:50, Vasiliy Angapov <angapov@xxxxxxxxx> wrote:
>> >
>> > Hi,
>> >
>> > Does anyone know what steps should be taken to rename a Ceph cluster?
>> > Btw, is it even possible without data loss?
>> >
>> > Background: I have a cluster named "ceph-prod" integrated with
>> > OpenStack; however, I found out that the default cluster name "ceph" is
>> > very much hardcoded into OpenStack, so I decided to change mine back to
>> > the default.
>> >
>> > Regards, Vasily.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



