advice on hardware configuration

On 06/05/14 17:57, Sergey Malinin wrote:
> My vision of a well-built node is one where the number of journal disks equals the number of data disks. You definitely don't want to lose 3 journals at once in the case of a single drive failure.
Thanks for your response. This is true, a single SSD failure also means 3
OSD failures (50% of the capacity of each node and 16% of the total capacity),
but the journal SSDs are Intel DC S3700 and they should be very reliable.
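To spell out the arithmetic (assuming all 3 nodes run 6 OSDs each, 18 OSDs in
total):

  3 journals per SSD / 6 OSDs per node   = 50% of that node's capacity offline
  3 affected OSDs    / 18 OSDs in total  = ~16.7% of total capacity offline
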
>> On 6 May 2014, at 18:07, Xabier Elkano <xelkano at hostinet.com> wrote:
>>
>>
>> Hi,
>>
>> I'm designing a new ceph pool with new hardware and I would like to
>> receive some suggestions.
>> I want to use a replica count of 3 in the pool, and the idea is to buy 3
>> new servers, each with a 10-drive 2.5" chassis and two 10Gbps NICs. I have
>> in mind two configurations:
>>
>> 1- With journal in SSDs
>>
>> OS: 2x SSD Intel DC S3500 100G, RAID 1
>> Journal: 2x SSD Intel DC S3700 100G, 3 journals per SSD
>> OSD: 6x SAS 10K 900G (SAS2 6Gbps), each running an OSD process. Total size
>> for OSDs: 5.4TB
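>>
>> A rough sketch of how the journal mapping could be expressed in ceph.conf
>> (partition labels and paths below are placeholders, not final names):
>>
>>   [osd.0]
>>   osd journal = /dev/disk/by-partlabel/journal-ssd0-p1   # 1st partition, 1st DC S3700
>>   [osd.1]
>>   osd journal = /dev/disk/by-partlabel/journal-ssd0-p2   # 2nd partition, 1st DC S3700
>>   [osd.2]
>>   osd journal = /dev/disk/by-partlabel/journal-ssd0-p3   # 3rd partition, 1st DC S3700
>>   # osd.3 - osd.5 would point at three partitions on the 2nd DC S3700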
>>
>> 2- With journal in a partition in the spinners.
>>
>> OS: 2x SSD Intel DC S3500 100G, RAID 1
>> OSD+journal: 8x SAS 15K 600G (SAS3 12Gbps), each running an OSD process and
>> its journal. Total size for OSDs: 3.6TB
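>>
>> With ceph-deploy, the two layouts would be prepared roughly as follows
>> (host and device names are just examples):
>>
>>   # configuration 1: data on the SAS disk, journal on an SSD partition
>>   ceph-deploy osd prepare node1:sdc:/dev/sdb1
>>   # configuration 2: no journal device given, so a journal partition is
>>   # carved out of the data disk itself, sized by "osd journal size"
>>   ceph-deploy osd prepare node1:sdc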
>>
>> The budget for both configurations is similar, but the total capacity is not.
>> Which would be the better configuration from a performance point of view?
>> In the second configuration I know the controller write-back cache could be
>> critical; the servers have an LSI 3108 controller with 2GB of cache. I have
>> to plan this storage as a KVM image backend, and the goal is performance
>> over capacity.
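>>
>> To compare the journal devices in both layouts, the plan is to run a
>> synchronous 4k write test with fio, since journal writes are small
>> O_DIRECT/O_DSYNC writes (a sketch only; /dev/sdX is a placeholder and the
>> test destroys data on it):
>>
>>   fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
>>       --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based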
>>
>> On the other hand, with this new hardware, what would be the better choice:
>> creating a new pool in an existing cluster or creating a completely new
>> cluster? Are there any advantages to creating and maintaining an isolated
>> new cluster?
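>>
>> If a new pool in the existing cluster is the way to go, I assume the new
>> hosts could be kept separate with their own CRUSH root and rule, roughly
>> like this (names and PG counts are placeholders):
>>
>>   ceph osd crush add-bucket kvm-root root
>>   ceph osd crush move node1 root=kvm-root          # repeat for each new host
>>   ceph osd crush rule create-simple kvm-rule kvm-root host
>>   ceph osd pool create kvm-images 2048 2048
>>   ceph osd pool set kvm-images crush_ruleset <ruleset id of kvm-rule>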
>>
>> thanks in advance,
>> Xabier
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


