Re: Ceph with high disk densities?

Hi Scott,

Just some observations from here.

We run 8 nodes: 2U units with 12 OSDs each (4x 500 GB SSD, 8x 4 TB platter) attached to two LSI 2308 cards. Each node has an Intel E5-2620 and 32 GB of memory.

Granted, we only have about 25 VMs on that cluster (some fairly IO-hungry, both in IOPS and in throughput), but we hardly see any CPU usage at all. We have ~6k PGs, and according to munin our avg. CPU time is ~9% of the aggregate capacity, i.e. 9% out of 1200% (6 cores plus hyper-threading, so 12 hardware threads).
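For anyone doing the math, those figures work out roughly as follows. This is a hedged back-of-envelope in Python; the even split across OSD daemons is my assumption, not a measurement:

```python
# Back-of-envelope from the munin figures above (assumptions: 12 OSDs per
# node, 6 cores with hyper-threading = 12 hardware threads, and load spread
# evenly across the OSD daemons).
threads = 12          # hardware threads per node (6 cores + HT)
osds = 12             # OSD daemons per node
avg_util = 0.09       # munin: ~9% of the 1200% aggregate

core_equiv = avg_util * threads   # busy hardware threads across the node
per_osd = core_equiv / osds       # rough share per OSD daemon
print(f"{core_equiv:.2f} threads busy, ~{per_osd:.2f} threads per OSD")
```

So on this (lightly loaded) cluster, each OSD daemon is averaging well under a tenth of a hardware thread.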

Sadly, I didn't record CPU usage while stress-testing or breaking it.

We're using Cuttlefish and XFS. And again, this cluster is still pretty underused, so the CPU usage does not reflect a more active system.

Cheers,
Martin


On Mon, Oct 7, 2013 at 6:15 PM, Scott Devoid <devoid@xxxxxxx> wrote:
I brought this up within the context of the RAID discussion, but it did not garner any responses. [1]

In our small test deployment (160 HDDs with one OSD per drive, across 20 machines), our performance is quickly bounded by CPU and memory overhead. These are 2U machines with two 6-core Nehalems; running 8 OSDs consumed 25% of the total CPU time. This was a Cuttlefish deployment.

This seems like rather high CPU overhead, particularly when we are looking to hit a density target of 10-15 4 TB drives per U within 1.5 years. Does anyone have suggestions for hitting this requirement? Are there ways to reduce the CPU and memory overhead per OSD?

My one suggestion was to use some form of RAID to join multiple drives and present them to a single OSD. A two-drive RAID-0 would halve the OSD overhead while doubling the failure rate and doubling the rebalance overhead per failure. It is not clear to me whether that is a net win.
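To make that trade-off concrete, here is a hedged back-of-envelope in Python; the per-drive annual failure rate is an illustrative assumption, not a measurement:

```python
# Illustrative comparison: one OSD per drive vs. one OSD per 2-drive RAID-0.
# drive_afr is a hypothetical annual failure rate chosen for illustration.
drive_afr = 0.04      # assumed annual failure rate per drive
drive_tb = 4.0        # 4 TB drives, as in the deployment above

# One OSD per drive: a failure loses one drive's worth of data.
single_osd_afr = drive_afr
single_rebalance_tb = drive_tb

# One OSD per 2-drive RAID-0: either drive failing loses the whole OSD,
# so the OSD failure rate is roughly doubled and a failure forces the
# cluster to re-replicate both drives' worth of data.
raid0_osd_afr = 1 - (1 - drive_afr) ** 2
raid0_rebalance_tb = 2 * drive_tb

print(f"OSD failure rate: {raid0_osd_afr:.4f} (RAID-0) vs {single_osd_afr:.4f}")
print(f"Rebalance per failure: {raid0_rebalance_tb} TB vs {single_rebalance_tb} TB")
```

Half as many OSD daemons, but each failure is twice as likely per OSD and twice as expensive to recover from, which is why the net effect is hard to call.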


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

