Advice on hardware configuration


On 06/05/14 18:17, Christian Balzer wrote:
> On Tue, 6 May 2014 18:57:04 +0300 Sergey Malinin wrote:
>
>> My vision of a well built node is when number of journal disks is equal
>> to number of data disks. You definitely don't want to lose 3 journals at
>> once in case of single drive failure.
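The failure-domain concern above can be made concrete with a quick sketch. This assumes the 6-OSD / 2-journal-SSD layout proposed later in the thread; the numbers are from the message, not a general rule.

```python
# Failure-domain sketch: how many OSDs go down when one shared journal
# SSD fails, for the 6-OSD / 2-journal-SSD node layout in this thread.
osds_per_node = 6
journal_ssds = 2
osds_per_journal = osds_per_node // journal_ssds  # 3 OSDs share one SSD

print(f"One journal SSD failure takes down {osds_per_journal} of "
      f"{osds_per_node} OSDs on the node "
      f"({osds_per_journal / osds_per_node:.0%}).")
```

Half the node's OSDs re-replicating at once is the scenario being warned about here.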
>>
> While that is certainly true, not everybody has an unlimited budget.
>
> I'd expect the DC S3700 to outlast the spinning rust, especially if the
> implementor is SMART enough to replace things before something
> unforeseen happens.
>
> However, using a 100GB DC S3700 with those drives isn't particularly
> wise performance-wise. I'd use at least the 200GB ones.
Hi Christian, you are right, I should use the 200GB ones at least. Thanks!
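A rough throughput check illustrates Christian's point. The figures below are datasheet-style sequential-write numbers and a generic guess for a 10K SAS drive; treat them as assumptions for the sketch, not measurements.

```python
# Back-of-envelope check of the journal bottleneck: every OSD write
# passes through the journal first, so the journal SSD must absorb the
# combined write stream of the spinners behind it.
# Sequential-write assumptions (MB/s), roughly datasheet-level:
ssd_seq_write = {"DC S3700 100GB": 200, "DC S3700 200GB": 365}
spinner_seq_write = 150          # one 10K SAS spinner, approximate
journals_per_ssd = 3

demand = journals_per_ssd * spinner_seq_write  # worst-case burst: 450 MB/s
for model, bw in ssd_seq_write.items():
    print(f"{model}: {bw} MB/s SSD vs up to {demand} MB/s spinner demand "
          f"(headroom ratio {bw / demand:.2f})")
```

Neither size covers a full sequential burst from three spinners, but the 200GB model nearly doubles the headroom, which is why it is the more sensible pairing.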
>
> Regards,
>
> Christian
>>> On 6 May 2014, at 18:07, Xabier Elkano <xelkano at hostinet.com>
>>> wrote:
>>>
>>>
>>> Hi,
>>>
>>> I'm designing a new Ceph pool with new hardware and I would like some
>>> suggestions.
>>> I want to use a replica count of 3 in the pool, and the idea is to buy
>>> 3 new servers, each with a 10-drive 2.5" chassis and two 10Gbps NICs.
>>> I have in mind two configurations:
>>>
>>> 1- With journal in SSDs
>>>
>>> OS: 2x Intel DC S3500 100G SSD, RAID 1
>>> Journal: 2x Intel DC S3700 100G SSD, 3 journals per SSD
>>> OSD: 6x SAS 10K 900G (SAS2 6Gbps), each running an OSD process. Total
>>> size for OSDs: 5.4TB
>>>
>>> 2- With journals in a partition on the spinners.
>>>
>>> OS: 2x Intel DC S3500 100G SSD, RAID 1
>>> OSD+journal: 8x SAS 15K 600G (SAS3 12Gbps), each running an OSD
>>> process and its journal. Total size for OSDs: 3.6TB
>>>
>>> The budget for both configurations is similar, but the total capacity
>>> is not. Which would be the better configuration from the point of view
>>> of performance? I know that in the second configuration the
>>> controller's write-back cache could be critical; the servers have an
>>> LSI 3108 controller with 2GB of cache. I have to plan this storage as
>>> a KVM image backend, and the goal is performance over capacity.
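The trade-off between the two layouts can be roughed out numerically. The capacities are the totals from the message; the per-spindle IOPS figures are generic guesses for 10K and 15K SAS drives, and the 2x journal penalty reflects that co-locating the journal on the data disk writes every object twice to the same spindle.

```python
# Rough usable-capacity and client-write-IOPS comparison of the two
# proposed layouts. Per-drive IOPS values are assumptions, not specs.
nodes, replicas = 3, 3

configs = {
    "1: 6x 10K SAS 900G + SSD journals":   {"raw_tb": 5.4, "spindles": 6, "iops": 140, "journal_penalty": 1},
    "2: 8x 15K SAS 600G, journal on disk": {"raw_tb": 3.6, "spindles": 8, "iops": 180, "journal_penalty": 2},
}

for name, c in configs.items():
    usable_tb = nodes * c["raw_tb"] / replicas
    # Client writes are amplified by replication, and by the journal
    # double-write when the journal shares the data spindle.
    write_iops = (nodes * c["spindles"] * c["iops"]
                  // (replicas * c["journal_penalty"]))
    print(f"{name}: usable ~{usable_tb:.1f} TB, "
          f"client write IOPS ~{write_iops}")
```

Under these assumptions the SSD-journal layout wins on both usable capacity and write IOPS, despite the slower spindles, which supports favoring configuration 1 when the goal is performance.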
>>>
>>> On the other hand, with this new hardware, which would be the better
>>> choice: creating a new pool in an existing cluster or creating a
>>> completely new cluster? Are there any advantages to creating and
>>> maintaining an isolated new cluster?
>>>
>>> thanks in advance,
>>> Xabier
>>>
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users at lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


