Ceph NIC partitioning (NPAR)


 



Hi,

I need your advice about the following setup.
Currently, we have a Ceph nautilus cluster used by Openstack Cinder with single NIC in 10Gbps on OSD hosts. We will upgrade the cluster by adding 7 new hosts dedicated to Nova/Glance and we would like to add a cluster network to isolate replication and recovery traffic. For now, it's not possible to add a second NIC and FC so we are thinking about enabling DELL NPAR [1] which allows splitting a single physical NIC in 2 logical NICs (1 for public network and 1 for Cluster network). We can set max and min bandwidth and implement dynamic bandwidth balancing for NPAR to get the appropriate bandwidth when Ceph need it (default alloc is 66% for cluster network and 34% for public network). Any experiences with this kind of configuration? Do you see any disadvantages doing this?

One more question: if we put this in production, is adding the cluster network setting to ceph.conf and restarting each OSD enough for Ceph to pick it up?
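For reference, here is a sketch of the ceph.conf change I have in mind (the subnets are placeholders for our actual ranges, not real values):

```
[global]
# Existing client-facing network (placeholder subnet)
public_network = 10.0.0.0/24
# New replication/recovery network carried on the second NPAR partition
# (placeholder subnet)
cluster_network = 10.0.1.0/24
```

My understanding is that each OSD binds to the cluster network at startup, which is why I am asking whether restarting the OSDs one by one is sufficient.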

Best,

Adrien

[1] https://www.dell.com/support/article/fr/fr/frbsdt1/how12596/how-npar-works?lang=en
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


