OSDs on 2 nodes vs. on one node

In a small cluster I have 2 OSD nodes with identical hardware, each with 6 OSDs.

* Configuration 1:  I shut down all the OSDs on one node, so I am running 6 OSDs on a single node.

* Configuration 2:  I shut down 3 OSDs on each node, so I still have 6 OSDs in total, but with 3 on each node.
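
(For reference, the way I take the OSDs down for these tests is roughly the following sketch. It assumes a systemd-based deployment, with OSD IDs 0-5 on node A and 6-11 on node B; those IDs are just for illustration.)

    # Prevent CRUSH from rebalancing data while OSDs are down for the test
    ceph osd set noout

    # Configuration 1: stop all 6 OSDs on node B (IDs 6-11 assumed)
    for id in 6 7 8 9 10 11; do systemctl stop ceph-osd@$id; done

    # Configuration 2: stop 3 OSDs on each node (IDs assumed)
    for id in 3 4 5; do systemctl stop ceph-osd@$id; done      # run on node A
    for id in 9 10 11; do systemctl stop ceph-osd@$id; done    # run on node B

    # Afterwards, clear the flag so recovery behaves normally again
    ceph osd unset noout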

I measure read performance using rados bench from a separate client node.
The client has plenty of spare CPU power and the network and disk utilization are not limiting factors.
In all cases the pool is replicated, so reads are served only from the primary OSD of each PG.
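
(For completeness, the bench invocations look roughly like the following. The pool name "testpool", the 60-second run time, and 16 concurrent ops are placeholders, not the exact values from my runs.)

    # Write objects first so there is something to read; keep them around
    rados bench -p testpool 60 write --no-cleanup

    # Sequential-read benchmark against the objects written above
    rados bench -p testpool 60 seq -t 16

    # Remove the benchmark objects when finished
    rados -p testpool cleanup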

With Configuration 1, I see approximately 70% more read bandwidth than with Configuration 2.
In general, any configuration where the OSDs span 2 nodes performs worse, and the effect is
strongest when the 2 nodes carry equal amounts of traffic.

Is there any Ceph parameter that might be throttling the cases where the OSDs span 2 nodes?
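
(In case it matters, this sketch shows how I have been inspecting what I assume are the candidate settings, via an OSD's admin socket; osd.0 is just an example daemon.)

    # Dump throttle-related config options from a running OSD
    ceph daemon osd.0 config show | grep -i throttle

    # Live throttle counters, to check whether any limiter is actually engaging
    ceph daemon osd.0 perf dump | grep -i throttle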

-- Tom Deneau, AMD