Hi,
We have a Ceph cluster running Luminous 12.2.2, with both a public network and a cluster network configured.
The cluster provides services for two big groups of clients plus some individual clients.
One group uses RGW and the other uses RBD.
Ceph's public network and the two client groups sit in three different VLANs, and each client group generates more traffic than our routing devices can handle.
Right now the RGW and MON roles are served by the same hosts.
I'd like to add an additional VLAN-tagged interface to all MON and OSD nodes to streamline communication with the big group of RBD clients, while keeping the current public network for individual requests.
From what I can find, having more than one public network is supported, according to http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#id1
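If I read that page correctly, public network accepts a comma-separated list of subnets, so I imagine something along these lines (the 10.212.33.0/24 subnet is just a placeholder for the RBD clients' VLAN):

public network = 10.212.32.0/24, 10.212.33.0/24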
Is it possible to have a MON host with two public addresses assigned, or do I need to designate other hosts to handle the MON roles with different public IP addresses?
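As far as I understand, in our current setup each monitor binds to a single address, i.e. roughly:

[mon.host01]
mon addr = 10.212.32.18:6789

which is why I'm unsure whether a second address can even be attached to an existing MON.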
How should I approach the RGW service? Here, too, I need to serve the big group of clients in a dedicated VLAN while keeping access for individual requests coming to an IP in the currently configured public network.
Is it possible to bind one civetweb instance to two IP addresses, or do I need separate instances per network address?
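To make the question concrete: the civetweb frontend's port option seems to accept an address:port pair, and the docs show multiple ports joined with '+', so I'm wondering whether a single instance could be configured like this (10.212.33.18 is a hypothetical address in the dedicated VLAN):

[client.rgw.host01]
rgw frontends = civetweb port=10.212.32.18:7480+10.212.33.18:7480

or whether I'd need a separate RGW instance and section per network.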
Our current ceph.conf is:
[global]
fsid = 1023c49f-3a10-42de-9f62-9b122db32f1f
mon_initial_members = host01,host02,host03
mon_host = 10.212.32.18,10.212.32.19,10.212.32.20
auth_supported = cephx
public_network = 10.212.32.0/24
cluster_network = 10.212.14.0/24
[client.rgw.host01]
rgw host = host01
rgw enable usage log = true
# debug_rgw = 20
[client.rgw.host02]
rgw host = host02
rgw enable usage log = true
[client.rgw.host03]
rgw host = host03
rgw enable usage log = true
[osd]
filestore xattr use omap = true
osd journal size = 10240
osd mount options xfs = noatime,inode64,logbsize=256k,logbufs=8
osd crush location hook = /usr/bin/opera-ceph-crush-location.sh
osd pool default size = 3
[mon]
mon compact on start = true
mon compact on trim = true
Thanks
Jakub