Re: Cluster network and public network

Hi Anthony,

Thanks for the feedback!
The servers are using two bonded interfaces, one for each network.
Each bond consists of two 25Gb/s cards in active-backup mode.
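
(In case it helps anyone reproducing this, below is a minimal Python sketch for
confirming the bonding mode and currently active slave. It only parses the
kernel's /proc/net/bonding/<bond> files; the bond names bond0/bond1 are
assumptions and may differ on your hosts.)

#!/usr/bin/env python3
"""Print bonding mode and currently active slave for each bond."""
from pathlib import Path

BONDS = ["bond0", "bond1"]  # assumed names: public bond, cluster bond

for bond in BONDS:
    path = Path("/proc/net/bonding") / bond
    if not path.exists():
        print(f"{bond}: no such bond on this host")
        continue
    mode = active = None
    slaves = []
    for line in path.read_text().splitlines():
        if line.startswith("Bonding Mode:"):
            mode = line.split(":", 1)[1].strip()
        elif line.startswith("Currently Active Slave:"):
            active = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            slaves.append(line.split(":", 1)[1].strip())
    print(f"{bond}: mode={mode} active_slave={active} slaves={slaves}")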

You're right, I should have done the test during heavy recovery or backfill.
I will benchmark the cluster again in order to get more accurate statistics.
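
(For anyone following along with the re-benchmark, here is a rough sketch for
logging recovery/backfill activity while it runs. It assumes the 'ceph' CLI and
an admin keyring are available on the host; since the pgmap key names differ a
bit between releases, it simply prints whatever recovery-related fields are
present. Stop it with Ctrl-C.)

#!/usr/bin/env python3
"""Poll 'ceph status' and log recovery/backfill fields from pgmap."""
import json
import subprocess
import time

KEYWORDS = ("recover", "backfill", "degraded", "misplaced")

while True:
    out = subprocess.run(["ceph", "status", "--format", "json"],
                         capture_output=True, check=True, text=True).stdout
    pgmap = json.loads(out).get("pgmap", {})
    stats = {k: v for k, v in pgmap.items()
             if any(word in k for word in KEYWORDS)}
    print(time.strftime("%H:%M:%S"), stats or "no recovery/backfill activity")
    time.sleep(10)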

Furthermore, the bandwidth figures were obtained from 'iftop'. The throughput
on the public interface is much higher than on the cluster interface.
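
(As a cross-check of the iftop numbers, a small sketch that samples
/proc/net/dev twice and prints per-interface throughput; the mapping of bond0
to the public network and bond1 to the cluster network is an assumption.)

#!/usr/bin/env python3
"""Rough per-interface throughput from /proc/net/dev counters."""
import time

IFACES = {"bond0": "public", "bond1": "cluster"}  # assumed mapping
INTERVAL = 5.0  # seconds between samples


def read_counters():
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            # field 0 = rx bytes, field 8 = tx bytes
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters


before = read_counters()
time.sleep(INTERVAL)
after = read_counters()

for iface, role in IFACES.items():
    if iface not in before or iface not in after:
        print(f"{iface} ({role}): not found")
        continue
    rx = (after[iface][0] - before[iface][0]) * 8 / INTERVAL / 1e6
    tx = (after[iface][1] - before[iface][1]) * 8 / INTERVAL / 1e6
    print(f"{iface} ({role}): rx {rx:.1f} Mbit/s, tx {tx:.1f} Mbit/s")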

Thanks

On Sat, May 9, 2020 at 4:32 PM, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

>
> > Hi,
> >
> > I deployed a few clusters with two networks as well as with only one
> > network. In my experience there is little difference between them.
> >
> > I did a performance test on a Nautilus cluster with two networks last week.
> > What I found is that the cluster network has low bandwidth usage
>
> During steady-state, sure.  Heartbeats go over that, as do replication ops
> when clients write data.
>
> During heavy recovery or backfill, including healing from failures,
> balancing, adding/removing drives, much more will be used.
>
> Conventional wisdom has been to not let that traffic DoS clients, or clients
> to DoS heartbeats.
>
> But this I think dates to a time when 1Gb/s networks were common.  If
> one’s using modern multiple/bonded 25Gb/s or 40Gb/s links ….
>
> > while public network bandwidth is nearly full.
>
> If your public network is saturated, that actually is a problem; the last
> thing you want is to add recovery traffic, or to slow down heartbeats.  For
> most people, it isn’t saturated.
>
> How do you define “full” ?  TOR uplinks?  TORs to individual nodes?
> Switch backplanes?  Are you using bonding with the wrong hash policy?
>
> > As a result, I don't think the cluster network is necessary.
>
> For an increasing percentage of folks deploying production-quality
> clusters, agreed.
>
> >
> >
> > On Fri, May 8, 2020 at 6:14 PM, Willi Schiegel <willi.schiegel@xxxxxxxxxxxxxx> wrote:
> >
> >> Hello Nghia,
> >>
> >> I once asked a similar question about network architecture and got the
> >> same answer from Wido den Hollander as the one Martin gave:
> >>
> >> There is no need to have a public and cluster network with Ceph. Working
> >> as a Ceph consultant I've deployed multi-PB Ceph clusters with a single
> >> public network without any problems. Each node has a single IP-address,
> >> nothing more, nothing less.
> >>
> >> In the current Ceph manual you can read
> >>
> >> It is possible to run a Ceph Storage Cluster with two networks: a public
> >> (front-side) network and a cluster (back-side) network. However, this
> >> approach complicates network configuration (both hardware and software)
> >> and does not usually have a significant impact on overall performance.
> >> For this reason, we generally recommend that dual-NIC systems either be
> >> configured with two IPs on the same network, or bonded.
> >>
> >> I followed the advice from Wido "One system, one IP address" and
> >> everything works fine. So, you should be fine with one interface for
> >> MONs, MGRs, and OSDs.
> >>
> >> Best
> >> Willi
> >>
> >> On 5/8/20 11:57 AM, Nghia Viet Tran wrote:
> >>> Hi Martin,
> >>>
> >>> Thanks for your response. Do you mean one network interface for only the
> >>> MON hosts or for the whole cluster including the OSD hosts? I’m confused now
> >>> because there are some projects that use only one public network for the
> >>> whole cluster. That means the rebalancing, object replication, and
> >>> heartbeats from OSD hosts would affect the performance of Ceph clients.
> >>>
> >>> From: Martin Verges <martin.verges@xxxxxxxx>
> >>> Date: Friday, May 8, 2020 at 16:20
> >>> To: Nghia Viet Tran <Nghia.Viet.Tran@xxxxxxxxxx>
> >>> Cc: "ceph-users@xxxxxxx" <ceph-users@xxxxxxx>
> >>> Subject: Re:  Cluster network and public network
> >>>
> >>> Hello Nghia,
> >>>
> >>> just use one network interface card and run frontend and backend traffic
> >>> on the same one. No problem with that.
> >>>
> >>> If you have a dual-port card, use both ports as an LACP channel and
> >>> maybe separate the traffic using VLANs if you want to, but that's not required either.
> >>>
> >>>
> >>> --
> >>>
> >>> Martin Verges
> >>> Managing director
> >>>
> >>> Mobile: +49 174 9335695
> >>> E-Mail: martin.verges@xxxxxxxx <mailto:martin.verges@xxxxxxxx>
> >>> Chat: https://t.me/MartinVerges
> >>>
> >>> croit GmbH, Freseniusstr. 31h, 81247 Munich
> >>> CEO: Martin Verges - VAT-ID: DE310638492
> >>> Com. register: Amtsgericht Munich HRB 231263
> >>>
> >>> Web: https://croit.io
> >>> YouTube: https://goo.gl/PGE1Bx
> >>>
> >>> On Fri, May 8, 2020 at 09:29, Nghia Viet Tran
> >>> <Nghia.Viet.Tran@xxxxxxxxxx <mailto:Nghia.Viet.Tran@xxxxxxxxxx>> wrote:
> >>>
> >>>    Hi everyone,
> >>>
> >>>    I have a question about the network setup. From the documentation, it’s
> >>>    recommended to have 2 NICs per host, as described in the picture below
> >>>
> >>>    [Diagram: Ceph public/cluster network layout from the documentation]
> >>>
> >>>    In the picture, OSD hosts connect to the cluster network for
> >>>    replication and heartbeats between OSDs; therefore, we definitely need
> >>>    2 NICs for them. But it seems there is no connection between the Ceph
> >>>    MONs and the cluster network. Can we install just 1 NIC on the MON hosts then?
> >>>
> >>>    I would appreciate any comments!
> >>>
> >>>    Thank you!
> >>>
> >>>    --
> >>>
> >>>    Nghia Viet Tran (Mr)
> >>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



