Re: Maximum MON Network Throughput Requirements

Thanks.
Our initial deployment will be 8 OSD nodes with 24 OSDs each (spinning rust, not SSD). Each node will also contain two PCIe P3700 NVMe drives for journals. I expect us to grow to a maximum of 15 OSD nodes.
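As a back-of-envelope check on why 40GbE makes sense for the OSD nodes described above, the aggregate disk bandwidth per node can be estimated as below. The ~150 MB/s sequential figure per spinning disk is an assumption for illustration, not a measured value:

```python
# Rough per-node network demand estimate for 24 spinners per node.
OSDS_PER_NODE = 24
MB_PER_SEC_PER_HDD = 150  # assumed sequential throughput per spinning disk

aggregate_mb_s = OSDS_PER_NODE * MB_PER_SEC_PER_HDD  # 3600 MB/s per node
aggregate_gb_s = aggregate_mb_s * 8 / 1000           # convert MB/s to Gb/s

print(f"Per-node disk bandwidth: ~{aggregate_gb_s:.1f} Gb/s")
```

At roughly 28.8 Gb/s of raw disk bandwidth per node, a 10GbE interconnect would bottleneck sequential workloads, while 40GbE leaves headroom for replication traffic.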

I'll just keep 40 gig on everything for the sake of consistency and not risk under-sizing my monitor nodes.

On May 2, 2016 6:17 PM, "Chris Jones" <cjones@xxxxxxxxxxx> wrote:
Mons and RGWs only use the public network, but mons can see a good deal of traffic. I would not recommend 1Gb; if you're looking for lower bandwidth, 10Gb would be good for most. It all depends on the overall size of the cluster. You mentioned 40Gb. If the nodes are high density, then 40Gb; if they are lower density, then 20Gb would be fine.

-CJ
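For reference, the public/cluster split Chris describes is set in ceph.conf; a minimal sketch, with placeholder subnets standing in for your actual networks:

```ini
[global]
# Mon, RGW, and client traffic uses the public network
public network  = 10.0.0.0/24
# OSD replication and recovery traffic uses the cluster network
cluster network = 10.0.1.0/24
```

With this split, only the OSD nodes need interfaces on both networks; mons sit on the public network alone.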

On Mon, May 2, 2016 at 12:09 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
I'm working on finalizing designs for my Ceph deployment. I'm currently leaning toward 40Gbps Ethernet for the interconnect between OSD nodes and to my MDS servers. But I don't really want to run 40 gig to my mon servers unless there is a reason. Would there be an issue with using 1 gig on my monitor servers?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Best Regards,
Chris Jones


