Re: Rename Ceph cluster


 



I think it's pretty clear:


"For example, when you run multiple clusters in a federated architecture, the cluster name (e.g., us-west, us-east) identifies the cluster for the current CLI session. Note: To identify the cluster name on the command line interface, specify a Ceph configuration file with the cluster name (e.g., ceph.conf, us-west.conf, us-east.conf, etc.). Also see CLI usage (ceph --cluster {cluster-name})."
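In practice that just means the name selects which conf file the CLI reads. A quick sketch (the "us-west" name is the illustrative one from the docs quote above; you need a live cluster for these to return anything):

```shell
# The cluster name maps to /etc/ceph/{cluster-name}.conf:
ceph --cluster us-west status

# The explicit equivalent, pointing at the same file directly:
ceph -c /etc/ceph/us-west.conf status
```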

But it could be tricky on the running OSDs, depending on the distribution's init scripts - you could find that "service ceph stop osd..." no longer works after the change, since the script can't find its pidfile anymore. Looking at the CentOS init script, it does accept a "-c conffile" argument, though.
(So you should be managing the OSDs with "-c ceph-prod.conf" now?)
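Something like this, assuming the sysvinit script's "-c" handling works as it reads (an assumption - check your distro's script, and "osd.0" is just a placeholder daemon id):

```shell
# Pass the renamed conf explicitly so the init script can locate the pidfile:
service ceph -c /etc/ceph/ceph-prod.conf stop osd.0

# Once everything is renamed back to the default, the plain form works again:
service ceph stop osd.0   # falls back to /etc/ceph/ceph.conf
```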

Jan


On 18 Aug 2015, at 14:13, Erik McCormick <emccormick@xxxxxxxxxxxxxxx> wrote:

I've got a custom named cluster integrated with Openstack (Juno) and didn't run into any hard-coded name issues that I can recall. Where are you seeing that?

As to the name change itself, I think it's really just a label applied to a configuration set. The name doesn't actually appear *in* the configuration files. It stands to reason that you should be able to rename the configuration files on the client side and leave the cluster alone. It'd be worth trying in a test environment anyway.

-Erik

On Aug 18, 2015 7:59 AM, "Jan Schermer" <jan@xxxxxxxxxxx> wrote:
This should be simple enough

mv /etc/ceph/ceph-prod.conf /etc/ceph/ceph.conf

No? :-)

Or you could set this in nova.conf:
images_rbd_ceph_conf=/etc/ceph/ceph-prod.conf

Obviously, since different parts of OpenStack have their own configs, you'd have to do something similar for cinder/glance... so probably not worth the hassle.
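For reference, the matching per-service options would look something like the fragment below. The option and section names are what I recall from the Juno-era RBD drivers - treat them as assumptions and check your release's config reference; "[rbd-prod]" is just an example backend section name:

```ini
# nova.conf
[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph-prod.conf

# cinder.conf (inside your RBD backend section)
[rbd-prod]
rbd_ceph_conf = /etc/ceph/ceph-prod.conf

# glance-api.conf
[glance_store]
rbd_store_ceph_conf = /etc/ceph/ceph-prod.conf
```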

Jan

> On 18 Aug 2015, at 13:50, Vasiliy Angapov <angapov@xxxxxxxxx> wrote:
>
> Hi,
>
> Does anyone know what steps should be taken to rename a Ceph cluster?
> Btw, is it even possible without data loss?
>
> Background: I have a cluster named "ceph-prod" integrated with
> OpenStack, however I found out that the default cluster name "ceph" is
> very much hardcoded into OpenStack so I decided to change it to the
> default value.
>
> Regards, Vasily.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

