Re: Single ceph client usage with multiple ceph cluster

At the risk of pedantry, I’d like to make a distinction, because this has tripped people up in the past.

Cluster names and config file names are two different things. It’s easy to conflate them, which has caused some people a lot of technical debt and grief, especially with `rbd-mirror`.

Custom cluster names, i.e. any cluster that isn’t named “ceph”, have been deprecated for a while, with upstream clearly indicating that code / support would be incrementally factored out.  They were originally intended for running more than one logical cluster on a given set of hardware, which it turns out very few people ever did.  They were also, IMHO, never completely implemented.  Deploying a new cluster today with a custom name would be a bad idea.

My read of the OP’s post was multiple clusters, with no indication of their names.  Ideally all will be named “ceph”.  Reasons for multiple clusters include infrastructure generational shifts, geographical diversity, minimization of blast radius, and an operational desire to only grow a given cluster to a certain size.

So I fully expect `--cluster` to be removed from commands over time, and would not advocate implementing any infrastructure using it, either directly or via `CEPH_ARGS`.

`-c` and `-k` to point to conf and key files would be better choices.

Note that conf file naming is (mostly) arbitrary, and is not tied to the actual *cluster* name.  One might have for example:

/etc/ceph-cluster1/ceph.conf
/etc/ceph-cluster2.conf
/etc/ceph/cluster-1.conf
/etc/ceph/cluster-2.conf
/var/lib/tool/ethelmerman.conf

All would work, though for historical reasons I like to avoid having more than one .conf file under /etc/ceph.
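
For example, a quick sketch using the names above (the keyring paths here are hypothetical; point `-k` at wherever you actually keep each cluster’s key):

ceph -c /etc/ceph/cluster-1.conf -k /etc/ceph/cluster-1.client.admin.keyring health detail
rbd -c /etc/ceph/cluster-2.conf -k /etc/ceph/cluster-2.client.admin.keyring ls <storage_pool>

Each invocation is explicit about which conf and keyring it uses, with no dependence on a cluster name at all.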

> 
> Hello Mosharaf,
> 
> yes, that's no problem. On all of my clusters I do not have a ceph.conf
> in the /etc/ceph folders on my nodes at all.
> 
> I have a <name_of_cluster_1>.conf, <name_of_cluster_2>.conf, <name_of_cluster_3>.conf ...
> configuration file in the /etc/ceph folder, one config file for each cluster.
> The same goes for the different key files, e.g. <name_of_cluster_1>.mon.keyring, <name_of_cluster_2>.mon.keyring,
> or the admin keys <name_of_cluster_1>.client.admin.keyring, <name_of_cluster_2>.client.admin.keyring
> 
> But keep in mind, if you go this way you have to provide the
> name of the cluster (which points to the configuration file you are referring to)
> together with the --cluster keyword for nearly every ceph command,
> e.g.: ceph --cluster <name_of_cluster_1> health detail
> or: rbd --cluster <name_of_cluster_2> ls <storage_pool>
> 
> 
> Regards
> Markus Baier
> -- 
> Markus Baier
> System Administrator
> Fachgebiet Self-Organizing Systems
> TU Darmstadt, Germany
> S3|19 1.7
> Rundeturmstrasse 12
> 64283 Darmstadt
> 
> Phone: +49 6151 16-57242
> Fax: +49 6151 16-57241
> E-Mail: Markus.Baier@xxxxxxxxxxxxxxxxxxx
> 
> On 09.12.21 at 03:35, Mosharaf Hossain wrote:
>> Hello Markus
>> Thank you for your direction.
>> I would like to let you know that the way you show is quite meaningful, but I am unsure how the ceph system would identify the configuration file, as by default it uses ceph.conf in the /etc/ceph folder. Can we define the config file as we want?
>> 
>> It will be helpful to give or show us guidelines to add the ceph client to multiple clusters.
>> 
>> 
>> 
>> 
>> Regards
>> Mosharaf Hossain
>> Deputy Manager, Product Development
>> IT Division
>> Bangladesh Export Import Company Ltd.
>> 
>> Level-8, SAM Tower, Plot #4, Road #22, Gulshan-1, Dhaka-1212,Bangladesh
>> 
>> Tel: +880 9609 000 999, +880 2 5881 5559, Ext: 14191, Fax: +880 2 9895757
>> 
>> Cell: +8801787680828, Email: mosharaf.hossain@xxxxxxxxxxxxxx <mailto:mosharaf.hossain@xxxxxxxxxxxxxx>, Web: www.bol-online.com
>> 
>> 
>> 
>> On Tue, Nov 2, 2021 at 7:01 PM Markus Baier <Markus.Baier@xxxxxxxxxxxxxxxxxxx> wrote:
>> 
>>    Hello,
>> 
>>    yes, you can use a single server to operate multiple clusters.
>>    I have a configuration running with two independent ceph clusters
>>    on the same node (of course with multiple nodes for each of the
>>    two clusters).
>> 
>>    The trick is to work with multiple ceph.conf files. I use two
>>    separate ceph.conf files under /etc/ceph/:
>>    one is called <cluster_a>.conf and the other <cluster_b>.conf
>> 
>>    Every cluster uses its own separate network interfaces, so I use
>>    four 10GbE interfaces for the two clusters, but you can also use
>>    VLANs together with a 100GbE interface, or a 100GbE NIC that can
>>    provide virtual ports, for the separation of the networks and the
>>    distribution of the network load.
>> 
>>    Every cluster also uses a separate keyring, so e.g. for the first
>>    cluster you have a keyring named <cluster_a>.mon.keyring
>>    and for the second one <cluster_b>.mon.keyring
>>    inside the /etc/ceph folder.
>> 
>>    To administer the whole thing, ceph provides
>>    the --cluster parameter for the command line programs.
>>    So ceph --cluster <cluster_a> -s
>>    will show the output for cluster one and
>>    ceph --cluster <cluster_b> -s
>>    for cluster two.
>> 
>> 
>>    Regards
>>    Markus Baier
>> 
>>    On 02.11.21 at 13:30, Mosharaf Hossain wrote:
>>    > Hi Users
>>    > We have two ceph clusters in our lab. We are experimenting with
>>    > using a single server as a client for the two ceph clusters. Can
>>    > we use the same client server to store keyrings for the different
>>    > clusters in the ceph.conf file?
>>    >
>>    >
>>    >
>>    >
>>    > Regards
>>    > Mosharaf Hossain
>> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



