Re: drives per CPU core?

Not to derail this conversation, but if looking purely at the CPU-core
to cost ratio, AMD has its Opteron 62xx and 63xx lines with up to 16
cores per CPU. You can plunk these into dual-CPU boards and get 32
cores relatively cheaply (no, I do not count Intel's HT as a core).

But, as has been said already, those estimates are recommendations for
recovery. Under normal usage the CPU is not doing much.
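For what it's worth, the "1 GHz of CPU per OSD" recovery guideline that comes up later in the thread is easy to sanity-check with a quick back-of-the-envelope script. This is only a rough sketch: the socket count, core count, and the 2.0 GHz clock below are illustrative assumptions, not a recommendation.

```python
# Back-of-the-envelope CPU sizing for a Ceph OSD node, using the rough
# "1 GHz of CPU per OSD" recovery guideline from this thread.
# All hardware numbers here are assumptions for illustration.

def cpu_headroom(osds, ghz_per_osd, sockets, cores_per_socket, ghz_per_core):
    """Return (GHz available, GHz needed, surplus) for a node."""
    available = sockets * cores_per_socket * ghz_per_core
    needed = osds * ghz_per_osd
    return available, needed, available - needed

avail, need, surplus = cpu_headroom(
    osds=36,               # one OSD per drive, 36 bays
    ghz_per_osd=1.0,       # the recovery-time guideline
    sockets=2,
    cores_per_socket=6,    # e.g. a pair of low-power 6-core chips
    ghz_per_core=2.0,      # assumed base clock
)
print(avail, need, surplus)  # 24.0 36.0 -12.0
```

By the strict guideline a pair of 6-core 2.0 GHz chips falls short for 36 OSDs, which is exactly the kind of "fudging" being discussed: fine for normal load, but worth testing under recovery.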

/Martin

On Wed, Feb 20, 2013 at 7:55 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Wed, Feb 20, 2013 at 10:36 AM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:
>> Hi Jonathan,
>>
>>
>> On 02/20/2013 12:28 PM, Jonathan Rudenberg wrote:
>>>
>>> I'm currently planning a Ceph deployment, and we're looking at 36x 4TB
>>> drives per node. It seems like the recommended setup is one OSD per
>>> drive; is this accurate? What is the recommended ratio of drives/OSDs
>>> per CPU core? Would 12 cores be enough (a 3:1 ratio)?
>>
>>
>> Typically 1 drive per OSD is the way to go, but once you get up into the
>> 36+ drives per node range, trade-offs start to appear (especially around
>> things like memory usage during recovery). You may need to do some
>> testing to make sure that you don't end up hitting swap.
>>
>> I've got an SC847a chassis with 36 bays that we are using for testing at
>> Inktank. I'm using dual E5-2630Ls and that seems to be working pretty
>> well, but I wouldn't go any slower than those chips. E5-2630s or 2640s
>> might be a bit better, but so far it looks like Ivy Bridge is fast
>> enough that you can fudge a bit on our "1 GHz of CPU per OSD" guideline
>> and get a pair of the cheaper 6-core chips.
>
> That 1 GHz per daemon recommendation is based on recovery performance;
> in general usage it'll often be much lower. I don't think you've done
> much with recovery yet, so don't count on that ratio working out once
> you do!
> -Greg
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

