Re: optimal setup with 4 x ethernet ports


 



I found the network to be the most limiting factor in Ceph.
Any chance you have to move to 10G+ would be beneficial.
I did have success with bonding: even a simple round-robin (balance-rr) mode increased throughput.
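For reference, a minimal sketch of what that bonding setup can look like on a Debian-style node (interface names and addresses below are hypothetical, not from the poster's actual configuration):

```
# /etc/network/interfaces (hypothetical example)
# bond-mode balance-rr (mode 0) sends packets round-robin across the slaves,
# which is what gave the throughput increase described above
auto bond0
iface bond0 inet static
    address 10.1.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100
```

One caveat with balance-rr: packets from a single TCP stream can arrive out of order across the slaves, which may hurt single-stream performance even as aggregate throughput improves.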


On Mon, Dec 2, 2013 at 10:17 PM, Kyle Bader <kyle.bader@xxxxxxxxx> wrote:

> Is having two cluster networks like this a supported configuration? Every osd and mon can reach every other so I think it should be.

Maybe. It can work if your back-end network is a supernet and each cluster network is a subnet of that supernet. For example:

ceph.conf cluster network (supernet): 10.0.0.0/8

Cluster network #1: 10.1.1.0/24
Cluster network #2: 10.1.2.0/24

With that configuration, OSD address autodetection *should* just work.
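In ceph.conf that would look something along these lines (the public network address here is illustrative, not from the original message):

```ini
[global]
; supernet covering both physical cluster networks,
; so OSDs on 10.1.1.0/24 and 10.1.2.0/24 both match it
cluster network = 10.0.0.0/8
; hypothetical front-end network for client traffic
public network = 192.168.0.0/24
```

Each OSD then binds its cluster address to whichever local interface falls inside 10.0.0.0/8.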

> 1. move osd traffic to eth1. This obviously limits maximum throughput to ~100Mbytes/second, but I'm getting nowhere near that right now anyway.

Given three links I would probably do this if your replication factor is >= 3. Keep in mind that ~100 MB/s links could very well end up being a limiting factor.

What are you backing each OSD with, storage-wise, and how many OSDs do you expect to participate in this cluster?


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Follow Me: @Scottix
http://about.me/scottix
Scottix@xxxxxxxxx
