I kind of doubt this will provide much of an advantage. Recovery is the only time you might see some speedup, but I'm not sure network throughput is always the bottleneck there. There was some discussion about this a while back; client IO is still going to be impacted by recovery either way.
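If the real worry is recovery starving client IO, the recovery throttles usually matter more than a separate network. A rough sketch with illustrative values (not a recommendation, and your defaults may already be fine):

    # Limit how aggressively OSDs backfill/recover so client IO keeps headroom
    # (values below are examples only):
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1
    ceph config set osd osd_recovery_sleep 0.1   # seconds to sleep between recovery ops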
On Wed, Sep 25, 2019 at 6:36 AM Adrien Georget <adrien.georget@xxxxxxxxxxx> wrote:
Hi,
I need your advice about the following setup.
Currently, we have a Ceph Nautilus cluster used by OpenStack Cinder, with a single 10 Gbps NIC on each OSD host.
We will expand the cluster by adding 7 new hosts dedicated to Nova/Glance, and we would like to add a cluster network to isolate replication and recovery traffic.
For now it's not possible to add a second NIC and FC, so we are thinking about enabling Dell NPAR [1], which allows splitting a single physical NIC into 2 logical NICs (1 for the public network and 1 for the cluster network). We can set max and min bandwidth and enable dynamic bandwidth balancing so that NPAR provides the appropriate bandwidth when Ceph needs it (the default allocation is 66% for the cluster network and 34% for the public network).
Does anyone have experience with this kind of configuration? Do you see any disadvantages to doing this?
And one more question: if we put this in production, is adding the cluster network setting to ceph.conf and restarting each OSD enough for Ceph?
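I.e., something like this in ceph.conf (the subnets below are placeholders for our actual ranges), followed by restarting the OSDs one at a time:

    [global]
    public network  = 10.0.0.0/24    # placeholder: existing client-facing subnet
    cluster network = 10.0.1.0/24    # placeholder: new replication/recovery subnet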
Best,
Adrien
[1]
https://www.dell.com/support/article/fr/fr/frbsdt1/how12596/how-npar-works?lang=en
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx