Re: Cores/Memory/GHz recommendation for SSD based OSD servers

On Thursday, April 2, 2015, Nick Fisk <nick@xxxxxxxxxx> wrote:
I'm probably going to get shot down for saying this...but here goes.

As a very rough guide, think of it more as needing around 10MHz of CPU for every IO. Whether that IO is 4k or 4MB, it uses roughly the same amount of CPU, as most of the CPU usage goes into Ceph data placement rather than the actual reads/writes to disk.

That piece of information is, by far, one of the most helpful things I've ever read on this list regarding hardware configuration. Thanks for sharing that!

That calculation came close to my cluster's max IOPS. I've seen just over 11k IOPS (under ideal conditions, with short bursts of IO), and the 10MHz calculation says 12k IOPS for my hardware (specs below, with the arithmetic sketched after the list).

For the record, my cluster is 6 OSD nodes, each node has:
2x 4-core 2.5GHz CPUs
32GB RAM
7x 3.5" 7.2k rpm 2TB disks (one OSD per disk)
RAID card with 1GB write-back cache w/ BBU
2x 40Gb NIC
No SSD journals
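
For anyone who wants to check the arithmetic, here's a minimal sketch of the 10MHz-per-IO rule of thumb applied to the hardware above. The function name and structure are just my illustration; the only input from Nick's post is the rough 10MHz figure:

# Minimal sketch of Nick's ~10MHz-per-IO rule of thumb.
# The 10MHz figure is a rough guide, not a measured constant.
MHZ_PER_IO = 10

def estimated_iops(nodes, sockets_per_node, cores_per_socket, mhz_per_core):
    total_mhz = nodes * sockets_per_node * cores_per_socket * mhz_per_core
    return total_mhz / MHZ_PER_IO

# My cluster: 6 nodes x 2 sockets x 4 cores x 2500MHz = 120,000MHz
print(estimated_iops(6, 2, 4, 2500))  # -> 12000.0, close to the ~11k I've seen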

What effect does replication have on the 10MHz/IO number, in your experience? My 11k IOPS was achieved with 2x replication, and I've seen over 10k IOPS with 3x replication. Typically, I can get 2k-3k IOPS with long sequential IO patterns.
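
My naive guess (purely an assumption on my part, not anything Nick claimed) is that each replicated write is a separate backend op burning its own ~10MHz, while reads hit only one OSD, so the write budget would divide by the replication factor. Extending the sketch above:

# Assumption (mine, not Nick's): each client write fans out into
# `replication` backend writes, each costing ~10MHz; reads are unaffected.
def estimated_write_iops(nodes, sockets_per_node, cores_per_socket,
                         mhz_per_core, replication):
    return estimated_iops(nodes, sockets_per_node, cores_per_socket,
                          mhz_per_core) / replication

print(estimated_write_iops(6, 2, 4, 2500, 2))  # -> 6000.0 at 2x replication

That naive model caps 2x writes at ~6k IOPS, which my 11k burst beats, so either those bursts were read-heavy or the per-replica cost is well under 10MHz. I'd be curious what others measure.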

I'm getting my budget ready for next quarter, so I've been trying to decide how to spend money to best improve Ceph performance.  

To improve long sequential write IO, I've been debating adding a PCIe flash accelerator card to each OSD node vs. just adding another 6 OSD nodes. The cost is about the same.


I can nearly saturate 12x 2.1GHz cores with a single SSD doing 4k IOs at high queue depths.
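(If the 10MHz figure holds, that's 12 x 2100MHz = 25,200MHz, or only about 2,500 IOPS' worth of CPU. That is far less than a single SSD can deliver at high queue depth, which is why the CPU saturates before the disk does.)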

Which brings us back to your original question: rather than asking how much CPU for x amount of SSDs, ask how many IOs you require out of your cluster.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
