Re: Cluster network and public network

On 5/8/20 12:13 PM, Willi Schiegel wrote:
> Hello Nghia,
> 
> I once asked a similar question about network architecture and got the
> same answer from Wido den Hollander as the one Martin wrote:
> 
> There is no need to have a public and cluster network with Ceph. Working
> as a Ceph consultant I've deployed multi-PB Ceph clusters with a single
> public network without any problems. Each node has a single IP-address,
> nothing more, nothing less.
> 
> In the current Ceph manual you can read
> 
> It is possible to run a Ceph Storage Cluster with two networks: a public
> (front-side) network and a cluster (back-side) network. However, this
> approach complicates network configuration (both hardware and software)
> and does not usually have a significant impact on overall performance.
> For this reason, we generally recommend that dual-NIC systems either be
> configured with two IPs on the same network, or bonded.
> 
> I followed the advice from Wido "One system, one IP address" and
> everything works fine. So, you should be fine with one interface for
> MONs, MGRs, and OSDs.
> 

Great to hear! I'm still behind this idea, and all the clusters I design
have a single network link (or an LACP bond) going to each host.

One IP address per node, over which all traffic goes: Ceph, SSH, (SNMP)
monitoring, etc.
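
For what it's worth, a minimal ceph.conf sketch of that single-network
layout could look like this (the subnet is just a placeholder for your
environment):

    [global]
    # One front-side network for everything: client I/O, MON/MGR/OSD
    # traffic, replication, and heartbeats.
    public_network = 192.0.2.0/24
    # No cluster_network is set, so OSD replication and heartbeats
    # simply go over the public network as well.

Every daemon then binds to the node's single IP address, and there is
nothing extra to cable, route, or firewall per network.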

Wido

> Best
> Willi
> 
> On 5/8/20 11:57 AM, Nghia Viet Tran wrote:
>> Hi Martin,
>>
>> Thanks for your response. Do you mean one network interface for only the
>> MON hosts or for the whole cluster, including the OSD hosts? I'm confused
>> now because there are some projects that only use one public network for
>> the whole cluster. That means the rebalancing, object replication, and
>> heartbeats from the OSD hosts would affect the performance of Ceph clients.
>>
>> From: Martin Verges <martin.verges@xxxxxxxx>
>> Date: Friday, May 8, 2020 at 16:20
>> To: Nghia Viet Tran <Nghia.Viet.Tran@xxxxxxxxxx>
>> Cc: "ceph-users@xxxxxxx" <ceph-users@xxxxxxx>
>> Subject: Re: Cluster network and public network
>>
>> Hello Nghia,
>>
>> just use one network interface card and run both frontend and backend
>> traffic over the same interface. No problem with that.
>>
>> If you have a dual-port card, use both ports as an LACP channel and
>> maybe separate the traffic using VLANs if you want to, but that is not
>> required either.
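>>
>> Purely as an illustration (interface names and addresses are made up,
>> and this assumes a netplan-based system), an LACP bond could look
>> roughly like this:
>>
>>     network:
>>       version: 2
>>       ethernets:
>>         ens1f0: {}
>>         ens1f1: {}
>>       bonds:
>>         bond0:
>>           interfaces: [ens1f0, ens1f1]
>>           addresses: [192.0.2.11/24]
>>           parameters:
>>             mode: 802.3ad            # LACP
>>             mii-monitor-interval: 100
>>
>> The switch side needs a matching LACP port channel, and a VLAN on top
>> of bond0 is optional if you decide to separate the traffic.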
>>
>>
>> -- 
>>
>> Martin Verges
>> Managing director
>>
>> Mobile: +49 174 9335695
>> E-Mail: martin.verges@xxxxxxxx
>> Chat: https://t.me/MartinVerges
>>
>> croit GmbH, Freseniusstr. 31h, 81247 Munich
>> CEO: Martin Verges - VAT-ID: DE310638492
>> Com. register: Amtsgericht Munich HRB 231263
>>
>> Web: https://croit.io
>> YouTube: https://goo.gl/PGE1Bx
>>
>> On Fri, 8 May 2020 at 09:29, Nghia Viet Tran
>> <Nghia.Viet.Tran@xxxxxxxxxx> wrote:
>>
>>     Hi everyone,
>>
>>     I have a question about the network setup. From the documentation,
>>     it is recommended to have 2 NICs per host, as described in the
>>     picture below.
>>
>>     [Diagram: public (front-side) and cluster (back-side) network layout]
>>
>>     In the picture, the OSD hosts connect to the cluster network for
>>     replication and heartbeats between OSDs; therefore, we definitely
>>     need 2 NICs for them. But it seems there is no connection between
>>     the Ceph MONs and the cluster network. Can we install just 1 NIC on
>>     the MON hosts then?
>>
>>     I would appreciate any comments!
>>
>>     Thank you!
>>
>>     --
>>     Nghia Viet Tran (Mr)
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



