Ceph with high disk densities?


 



I brought this up within the context of the RAID discussion, but it did not garner any responses. [1]

In our small test deployment (160 HDDs with one OSD each, across 20 machines), performance quickly became bounded by CPU and memory overhead. These are 2U machines with two 6-core Nehalem CPUs, and running 8 OSDs consumed 25% of the total CPU time. This was a Cuttlefish deployment.

This seems like rather high CPU overhead, particularly as we are looking to hit a density target of 10-15 4 TB drives per U within 1.5 years. Does anyone have suggestions for meeting this requirement? Are there ways to reduce the CPU and memory overhead per OSD?
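To make the arithmetic concrete, here is a back-of-envelope sketch of the CPU budget. The measured figures are from the deployment above; the projected OSD count per machine is an illustrative assumption, not a plan:

```python
# Back-of-envelope CPU budget per OSD.
cores_per_machine = 2 * 6    # two 6-core Nehalem CPUs (measured setup)
cpu_fraction = 0.25          # 8 OSDs consumed 25% of total CPU time (measured)
osds_per_machine = 8

cores_per_osd = cores_per_machine * cpu_fraction / osds_per_machine
print(f"cores per OSD today: {cores_per_osd:.3f}")   # 0.375

# Assumption for illustration: 12 x 4 TB drives per U in a 2U chassis
# would mean 24 OSDs per machine at the same per-OSD cost.
target_osds = 24
print(f"cores needed at target density: {target_osds * cores_per_osd:.1f}")  # 9.0
```

So at today's per-OSD cost, the hypothetical 24-OSD box would burn roughly 9 of its 12 cores on Ceph alone, which is why I'm asking about reducing the per-OSD overhead.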

My one idea was to use some form of RAID to join multiple drives and present them to a single OSD. A 2-drive RAID-0 would halve the per-OSD overhead, but it doubles the failure rate of each OSD and doubles the data moved per rebalance. It is not clear to me whether that is a net win.
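A rough sketch of that tradeoff, assuming independent drive failures and an illustrative annualized failure rate (the 3% figure is an assumption, not a measurement):

```python
# Tradeoff sketch: one OSD per drive vs. one OSD per 2-drive RAID-0.
afr = 0.03       # assumed annualized failure rate per drive (illustrative)
drive_tb = 4.0   # 4 TB drives

# Single-drive OSD: one drive failing loses one OSD (4 TB to rebalance).
# RAID-0 OSD: either of the two drives failing loses the whole OSD (8 TB).
p_single = afr
p_raid0 = 1 - (1 - afr) ** 2     # ~2x afr for small afr

print(f"P(OSD loss)/yr  single: {p_single:.4f}  raid0: {p_raid0:.4f}")
print(f"TB rebalanced per loss  single: {drive_tb}  raid0: {2 * drive_tb}")

# Fleet-wide: RAID-0 gives half as many OSDs, each ~2x as likely to fail,
# so the loss *event* rate is about the same -- but each event moves twice
# the data, so expected rebalance traffic roughly doubles overall.
```

So RAID-0 trades half the daemon overhead for roughly double the expected rebalance traffic, in larger and lumpier chunks. That is the comparison I am unsure about.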

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/004833.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
