Hi,
We've set up Ceph and OpenStack on a fairly peculiar network
configuration (or at least I think it is), and I'm looking for
information on how to make it work properly.
Basically, we have three networks: a management network, a storage network
and a cluster network. The management network is on a 1 Gbps link,
while the storage network is on two bonded 10 Gbps links. The cluster
network can be ignored for now, as it works well.
Now, the main problem is that the Ceph OSD nodes are plugged into the
management, storage and cluster networks, but the monitors are only
plugged into the management network. When I run tests, I see that all the
traffic ends up going through the management network, slowing down
Ceph's performance. Because of the current network setup, I can't hook
up the monitor nodes to the storage network, as we're missing ports
on the switch.
Would it be possible to keep the monitors reachable on the management
network while forcing the Ceph cluster to use the storage network for
data transfer?
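For what it's worth, I assume the addresses each daemon has actually
registered can be confirmed with the usual commands, e.g.:

    ceph mon dump                  # monitor addresses (all on 10.251.0.x here)
    ceph osd dump | grep "^osd."   # each OSD's public and cluster address

which is the kind of output I've been using to conclude that everything
is going over the management network.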
For reference, here's my ceph.conf:
[global]
osd_pool_default_pgp_num = 800
osd_pg_bits = 12
auth_service_required = cephx
osd_pool_default_size = 3
filestore_xattr_use_omap = true
auth_client_required = cephx
osd_pool_default_pg_num = 800
auth_cluster_required = cephx
mon_host = 10.251.0.51
public_network = 10.251.0.0/24, 10.21.0.0/24
mon_initial_members = cephmon1
cluster_network = 192.168.31.0/24
fsid = 60e1b557-e081-4dab-aa76-e68ba38a159e
osd_pgp_bits = 12
As you can see, I've set up two public networks, 10.251.0.0/24 being the
management network and 10.21.0.0/24 being the storage network. Would it be
possible to maintain cluster functionality and remove 10.251.0.0/24 from
the public_network list? For example, if I were to remove it from the
public network list and reference each monitor node's IP in the config
file, would I be able to maintain connectivity?
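In other words, I imagine ceph.conf would end up looking roughly like this
(just a sketch of what I have in mind; the monitor keeps its management IP
since it has no port on the storage network):

    [global]
    public_network = 10.21.0.0/24
    cluster_network = 192.168.31.0/24
    mon_initial_members = cephmon1
    mon_host = 10.251.0.51

    [mon.cephmon1]
    mon_addr = 10.251.0.51:6789

That is essentially the scenario I'm asking about: the monitor address
would no longer fall inside the declared public_network.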
--
======================
Jean-Philippe Méthot
Administrateur système / System administrator
GloboTech Communications
Phone: 1-514-907-0050
Toll Free: 1-(888)-GTCOMM1
Fax: 1-(514)-907-0750
jpmethot@xxxxxxxxxx
http://www.gtcomm.net