Re: drives per CPU core?

Hi Jonathan,

On 02/20/2013 12:28 PM, Jonathan Rudenberg wrote:
I'm currently planning a Ceph deployment, and we're looking at 36x 4TB drives per node. It seems like the recommended setup is one OSD per drive; is that accurate? What is the recommended ratio of drives/OSDs per CPU core? Would 12 cores be enough (a 3:1 ratio)?

Typically one OSD per drive is the way to go, but once you get up into the 36+ drives-per-node range, trade-offs start to appear (especially around memory usage during recovery). You may need to do some testing to make sure you don't end up hitting swap.
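For a rough sense of what that means, here's a back-of-the-envelope memory check (Python, just for the arithmetic). The per-OSD figures are illustrative assumptions, not official or measured numbers; substitute whatever you actually observe on your own hardware during recovery testing:

# Rough memory sanity check for a 36-OSD node. The per-OSD RSS figures
# below are assumed placeholders, not measured or documented values.
OSDS_PER_NODE = 36
MEM_PER_OSD_STEADY_GB = 1.0    # assumed steady-state RSS per ceph-osd daemon
MEM_PER_OSD_RECOVERY_GB = 2.0  # assumed peak RSS per daemon during recovery
OS_HEADROOM_GB = 8.0           # assumed headroom for the OS and page cache

def required_ram_gb(per_osd_gb):
    """Total RAM needed so the node stays out of swap at this per-OSD usage."""
    return OSDS_PER_NODE * per_osd_gb + OS_HEADROOM_GB

for label, per_osd in [("steady state", MEM_PER_OSD_STEADY_GB),
                       ("recovery peak", MEM_PER_OSD_RECOVERY_GB)]:
    print(f"{label}: ~{required_ram_gb(per_osd):.0f} GB RAM to stay out of swap")

With those made-up numbers it comes out to roughly 44 GB at steady state and 80 GB during a recovery peak, so a 36-bay box wants a lot of RAM; the real answer depends on your own testing.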

I've got a Supermicro SC847A chassis with 36 bays that we are using for testing at Inktank. I'm using dual E5-2630Ls and that seems to be working pretty well, but I wouldn't go any slower than those chips. E5-2630s or 2640s might be a bit better, but so far it looks like Ivy Bridge is fast enough that you can fudge a bit on our "1 GHz of CPU per OSD" guideline and get a pair of the cheaper 6-core chips.
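To put rough numbers on the "1 GHz of CPU per OSD" guideline, here's the same kind of back-of-the-envelope arithmetic (the base clocks listed are assumptions for those parts; check the SKUs you actually order):

# Aggregate clock vs. the "1 GHz per OSD" guideline for a 36-OSD node.
# Clock speeds below are assumed base frequencies, not verified spec-sheet values.
OSDS = 36
GUIDELINE_GHZ_PER_OSD = 1.0

cpu_options = {
    "2x E5-2630L (6 cores @ ~2.0 GHz)": 2 * 6 * 2.0,
    "2x E5-2630 (6 cores @ ~2.3 GHz)": 2 * 6 * 2.3,
    "2x E5-2640 (6 cores @ ~2.5 GHz)": 2 * 6 * 2.5,
}

needed_ghz = OSDS * GUIDELINE_GHZ_PER_OSD
print(f"guideline asks for ~{needed_ghz:.0f} aggregate GHz for {OSDS} OSDs")
for name, total_ghz in cpu_options.items():
    print(f"{name}: {total_ghz:.1f} GHz ({total_ghz / needed_ghz:.0%} of guideline)")

Under those assumed clocks, all three pairs come in below the strict 36 GHz the guideline would call for (roughly 24 to 30 GHz aggregate), which is exactly the "fudge a bit" described above.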

Mark


Thanks,

Jonathan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

