Re: Migrating to a dedicated cluster network

Jakub Jaszewski wrote:
: Hi Yenya,
: 
: Can I ask how your cluster looks and why you want to split the network?

	Jakub,

we originally deployed the Ceph cluster as a proof of concept for
a private cloud. We run OpenNebula and Ceph on about 30 old servers
with old HDDs (2 OSDs per host), all connected via 1 Gbit Ethernet
with a 10 Gbit backbone. Since then, our private cloud has become quite
popular with our users, so we are planning to upgrade it to a smaller
number of modern servers. The new servers each have two 10GbE interfaces,
so the primary reasoning behind the split is "why not use both when we
already have them". Of course, interface teaming/bonding is another option.
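
For the split itself, what I have in mind is roughly the following in
ceph.conf (the subnets below are only placeholders for our real ones,
and I have not yet verified what else is needed when changing this on
an already running cluster):

	[global]
	public network  = 192.0.2.0/24
	cluster network = 198.51.100.0/24

with the cluster network then carrying the replication and recovery
traffic between the OSDs.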

Currently I only see the network getting saturated during a live
migration of a VM between physical hosts, and during a Ceph cluster
rebalance.

So, I don't think moving to a dedicated cluster network is a necessity for us.

Anyway, does anybody run the cluster network with a larger MTU (jumbo frames)?
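
If so, I imagine the setup would be roughly the following on every node,
assuming eth1 is the cluster-network interface (the interface name is just
an example) and that all switch ports on the path accept jumbo frames too:

	# raise the MTU on the cluster interface (temporarily, for testing)
	ip link set dev eth1 mtu 9000
	# verify the path end to end: 8972 = 9000 - 20 (IP) - 8 (ICMP),
	# with fragmentation prohibited
	ping -M do -s 8972 <cluster IP of another node>

but I would like to hear whether it is worth the trouble in practice.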

: We used to set up clusters of 9-12 OSD nodes (12-16 HDDs each) using 2x10Gb
: for access and 2x10Gb for the cluster network; however, I don't see a reason
: not to use just one network for the next cluster setup.


-Yenya

: On Wed, 23 Jan 2019 at 10:40, Jan Kasprzak <kas@xxxxxxxxxx> wrote:
: 
: >         Hello, Ceph users,
: >
: > is it possible to migrate an already deployed Ceph cluster, which uses
: > the public network only, to split public and cluster networks? If so,
: > can this be done without service disruption? I have now got new
: > hardware which makes this possible, but I am not sure how to do it.
: >
: >         Another question is whether the cluster network can be set up
: > solely on top of IPv6 link-local addresses, without any public address
: > prefix.
: >
: >         When deploying this cluster (Ceph Firefly, IIRC), I had problems
: > with mixed IPv4/IPv6 addressing, and ended up with ms_bind_ipv6 = false
: > in my Ceph conf.
: >
: >         Thanks,
: >
: > -Yenya
: >
: > --
: > | Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
: > | http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
: >  This is the world we live in: the way to deal with computers is to google
: >  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev

-- 
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
 This is the world we live in: the way to deal with computers is to google
 the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



