Re: Beginner question: network configuration best practice

Thank you, your answer helps a lot!

On 15.11.19 13:21, Wido den Hollander wrote:


On 11/15/19 12:57 PM, Willi Schiegel wrote:
Hello All,

I'm starting to setup a Ceph cluster and am confused about the
recommendations for the network setup.

In the Mimic manual I can read

"We recommend running a Ceph Storage Cluster with two networks: a public
(front-side) network and a cluster (back-side) network."

In the Nautilus manual there is

"Ceph functions just fine with a public network only, but you may see
significant performance improvement with a second “cluster” network in a
large cluster.

It is possible to run a Ceph Storage Cluster with two networks: a public
(front-side) network and a cluster (back-side) network. However, this
approach complicates network configuration (both hardware and software)
and does not usually have a significant impact on overall performance.
For this reason, we generally recommend that dual-NIC systems either be
configured with two IPs on the same network, or bonded."

Am I misunderstanding something, or are "significant performance
improvement" and "does not usually have a significant impact on overall
performance" in the Nautilus doc contradictory? So, which way to go?


There is no need to have a public and cluster network with Ceph. Working
as a Ceph consultant I've deployed multi-PB Ceph clusters with a single
public network without any problems. Each node has a single IP-address,
nothing more, nothing less.
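
A single-network setup like this can be sketched in ceph.conf with only
a public network directive (the subnet below is a placeholder, not taken
from this thread):

```ini
[global]
# One flat network for all Ceph traffic: client I/O, OSD replication,
# and heartbeats. Because no cluster_network is defined, Ceph sends
# everything over the public network.
public_network = 10.0.0.0/24
```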

The whole idea of separate public/cluster networks dates back to the
time when 10G was expensive. But nowadays having 2x25G per node isn't
that expensive anymore and is sufficient for almost all use cases.
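
For the dual-NIC case the Nautilus docs mention, the two 25G ports are
typically bonded into one logical interface so each node still has a
single IP. A rough sketch with netplan (interface names, bond mode, and
addresses are assumptions, not from this thread):

```yaml
network:
  version: 2
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]   # the two 25G ports
      parameters:
        mode: 802.3ad                    # LACP; the switch must support it
        lacp-rate: fast
      addresses: [10.0.0.11/24]          # one IP per node
```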

I'd save the money for a second network and spend it on an additional
machine in the cluster. That lets you scale out even more.

My philosophy: One node, one IP.

I've deployed dozens of clusters this way and they all work fine :-)

Wido

Thank you very much
Willi


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



