Re: Ceph cluster network bandwidth?

That depends on another question.  Does the client write all 3 copies itself, or does the client send one copy to the primary OSD, which then sends the write on to the secondaries?  Someone asked this recently, but I don't recall if an answer was given, and I'm not actually certain which is the case.  If it's the latter, then the 10Gb pipe from the client is all you need.
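
To put rough numbers on the difference between the two models (a minimal sketch in Python; the figures are taken from the cluster described below, the split between the models is an assumption, not a measurement):

replicas = 3
client_nic_gbps = 10        # the client's front-side 10Gb NIC
osd_hosts = 6               # from the cluster described below

# Model A: the client writes all 3 copies itself, so every byte of useful
# data crosses the client's NIC 3 times.
useful_gbps_model_a = client_nic_gbps / replicas

# Model B: the client writes once to the primary OSD and the primary fans
# the write out to the other replicas over the cluster network.
useful_gbps_model_b = client_nic_gbps
cluster_net_gbps_model_b = (replicas - 1) * useful_gbps_model_b

print(f"Model A: ~{useful_gbps_model_a:.1f} Gbps of useful client throughput")
print(f"Model B: {useful_gbps_model_b} Gbps useful, with ~{cluster_net_gbps_model_b} Gbps "
      f"of replication traffic spread across {osd_hosts} OSD hosts")

Either way, note that the replication traffic is an aggregate across the whole cluster, not a requirement on any single link.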

If I had to guess, the client sends the writes to all OSDs, but maxing the 10Gb pipe for a single client isn't really your concern.  Few use cases would have a single client using 100% of the bandwidth.  For RGW, spin up a few more RGW daemons and balance them with an LB.  With CephFS, the clients communicate with the OSDs directly, and you probably shouldn't use a network FS for a single client anyway.  RBD is the most likely place where this could happen, but few 6-server deployments are used by a single client consuming all of the RBDs.  What I'm getting at is that 3 clients with 10Gb can come pretty close to fully saturating the 10Gb Ethernet on the cluster, at least to the point where the network pipe is not the bottleneck (OSD node CPU, OSD spindle speeds, etc. become the limits instead).
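
For a rough sense of where the spindles land relative to the network pipe (the per-disk throughput here is a guess, not a benchmark; substitute your own figures):

sas_disks_per_host = 10          # from the cluster described below
assumed_mb_s_per_disk = 150      # guessed sequential throughput of one SAS spindle
nic_gbps = 10

raw_disk_gbps = sas_disks_per_host * assumed_mb_s_per_disk * 8 / 1000  # MB/s -> Gbps
print(f"Raw spindle throughput per host: ~{raw_disk_gbps:.0f} Gbps vs a {nic_gbps}Gb NIC")
# Replication write amplification, journaling, and random I/O pull the effective
# disk number well below the raw figure, which is why the 10Gb pipe is often
# not the first thing to saturate.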

On Thu, Nov 16, 2017 at 9:46 AM Sam Huracan <nowitzki.sammy@xxxxxxxxx> wrote:
Hi,

We intend to build a new Ceph cluster with 6 Ceph OSD hosts and 10 SAS disks per host, using a 10Gbps NIC for the client network; objects are replicated 3x.

So, how should I size the cluster network for the best performance?
From what I have read, 3x replication means 3x the client network bandwidth = 30 Gbps. Is that true? I think that is too much and would add a great deal of cost.

Could you give me a suggestion?

Thanks in advance.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
