Re: Migrating to a dedicated cluster network

Splitting the network is rarely worth it; one fast network is usually
better. And since you mentioned having only two interfaces: one bond is
far better than two independent interfaces.
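If it helps, here is a minimal sketch of an LACP bond over the two 10GbE
ports using iproute2. The interface names eth0/eth1, the bond name and
the address are placeholders, and the switch ports need a matching
802.3ad configuration:

  # create an 802.3ad (LACP) bond and enslave both 10GbE ports
  ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
  ip link set eth0 down && ip link set eth0 master bond0
  ip link set eth1 down && ip link set eth1 master bond0
  ip link set bond0 up
  # example address only
  ip addr add 192.0.2.10/24 dev bond0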

IPv4/IPv6 dual-stack setups will be supported in Nautilus; for now you
have to use either IPv4 or IPv6.
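If you go IPv6-only, the ceph.conf fragment looks roughly like this
(a sketch only; the documentation prefixes below are placeholders, and
the cluster_network line is only needed if you split networks at all):

  [global]
  ms_bind_ipv6 = true
  public_network = 2001:db8:1::/64
  cluster_network = 2001:db8:2::/64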

Jumbo frames: often mentioned but usually not worth it.
(Yes, I know this is somewhat controversial and increasing the MTU is a
standard trick for performance tuning, but I have yet to see a benchmark
that actually shows a significant performance improvement. Some quick
tests show that I can save around 5-10% CPU load on a system doing
~50 Gbit/s of IO traffic, which is almost nothing given the total system
load.)
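If you want to experiment with jumbo frames anyway, the usual sanity
check is to raise the MTU and then verify the path end to end with
non-fragmenting pings (interface name and peer address are placeholders;
every switch port and host along the path must carry the larger MTU):

  ip link set dev bond0 mtu 9000
  # 9000 bytes MTU minus 20 (IPv4 header) and 8 (ICMP header) = 8972 payload
  ping -M do -s 8972 <peer-address>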



Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Jan 23, 2019 at 11:41 AM Jan Kasprzak <kas@xxxxxxxxxx> wrote:
>
> Jakub Jaszewski wrote:
> : Hi Yenya,
> :
> : Can I ask what your cluster looks like and why you want to split the
> : network?
>
>         Jakub,
>
> we originally deployed the Ceph cluster as a proof of concept for
> a private cloud. We run OpenNebula and Ceph on about 30 old servers
> with old HDDs (2 OSDs per host), all connected via 1 Gbit ethernet
> with a 10 Gbit backbone. Since then our private cloud has become pretty
> popular among our users, so we are planning to upgrade it to a smaller
> number of modern servers. The new servers have two 10GbE interfaces, so
> the primary reasoning is "why not use both when we already have them".
> Of course, interface teaming/bonding is another option.
>
> Currently I see the network saturated only when doing a live migration
> of a VM between physical hosts, and during a Ceph cluster rebalance.
>
> So, I don't think moving to a dedicated cluster network is a necessity for us.
>
> Anyway, does anybody use the cluster network with a larger MTU (jumbo frames)?
>
> : We used to set up clusters of 9-12 OSD nodes (12-16 HDDs each) using
> : 2x10Gb for access and 2x10Gb for the cluster network; however, I don't
> : see a reason not to use just one network for the next cluster setup.
>
>
> -Yenya
>
> : On Wed, Jan 23, 2019 at 10:40 Jan Kasprzak <kas@xxxxxxxxxx> wrote:
> :
> : >         Hello, Ceph users,
> : >
> : > is it possible to migrate an already deployed Ceph cluster, which
> : > uses the public network only, to a setup with split public and
> : > cluster networks? If so, can this be done without service
> : > disruption? I now have new hardware which makes this possible, but
> : > I am not sure how to do it.
> : >
> : >         Another question is whether the cluster network can be run
> : > solely on top of IPv6 link-local addresses without any public address
> : > prefix.
> : >
> : >         When deploying this cluster (Ceph Firefly, IIRC), I had problems
> : > with mixed IPv4/IPv6 addressing, and ended up with ms_bind_ipv6 = false
> : > in my Ceph conf.
> : >
> : >         Thanks,
> : >
> : > -Yenya
> : >
> : > --
> : > | Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
> : > | http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
> : >  This is the world we live in: the way to deal with computers is to google
> : >  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
>
> --
> | Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
> | http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
>  This is the world we live in: the way to deal with computers is to google
>  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



