Advice on hardware configuration

On 06/05/14 17:51, Wido den Hollander wrote:
> On 05/06/2014 05:07 PM, Xabier Elkano wrote:
>>
>> Hi,
>>
>> I'm designing a new ceph pool with new hardware and I would like to
>> receive some suggestions.
>> I want to use a replica count of 3 in the pool, and the idea is to buy 3
>> new servers, each with a 10-drive 2.5" chassis and two 10Gbps NICs. I
>> have two configurations in mind:
>>
>
> Why 3 machines? That's something I would not recommend. If you want 30
> drives, I'd say go for 8 machines with 4 drives each.
>
> If a single machine fails it's 12.5% of the cluster size instead of 33%!
>
> I always advise that a failure of a single machine should be 10% or
> less of the total cluster size.
>
> Wido
The idea is to start with 3 nodes and scale them out in the future. I am
aware that a single server failure would mean losing 33% of the cluster,
but if the whole pool's performance is good enough with 3 replicas spread
over 3 nodes, maybe it could cope with that.

The biggest cost here is the rack space and the servers, not the disks,
so I would prefer to start with 3 high-density servers and scale them
out progressively.

Do you think this cannot be good enough for production?
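A quick way to see Wido's 10% guideline is to compute the share of total capacity that a single node represents (the node counts below are illustrative, not a recommendation):

```python
# Fraction of cluster capacity lost when a single node fails,
# assuming all nodes are identical.
def node_failure_fraction(num_nodes: int) -> float:
    return 1.0 / num_nodes

for nodes in (3, 8, 10):
    print(f"{nodes} nodes -> {node_failure_fraction(nodes):.1%} lost per node failure")
# 3 nodes -> 33.3% lost per node failure
# 8 nodes -> 12.5% lost per node failure
# 10 nodes -> 10.0% lost per node failure
```

Note also that with 3 nodes and a replica count of 3 (one replica per host, the default CRUSH behaviour), a node failure leaves no spare host to recover the third replica onto, so the pool stays degraded until the node returns or a new one is added.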
>
>> 1- With journal in SSDs
>>
>> OS: 2x Intel DC S3500 100G SSDs in RAID 1
>> Journal: 2x Intel DC S3700 100G SSDs, 3 journals per SSD
>> OSD: 6x SAS 10K 900G (SAS2 6Gbps), each running an OSD process. Total
>> size for OSDs: 5.4TB
>>
>> 2- With journal in a partition on the spinners.
>>
>> OS: 2x Intel DC S3500 100G SSDs in RAID 1
>> OSD+journal: 8x SAS 15K 600G (SAS3 12Gbps), each running an OSD process
>> and its journal. Total size for OSDs: 3.6TB
>>
>> The budget for both configurations is similar, but the total capacity
>> is not. Which would be the better configuration from a performance
>> point of view? In the second configuration I know the controller's
>> write-back cache could be critical; the servers have an LSI 3108
>> controller with 2GB of cache. I have to plan this storage as a KVM
>> image backend, and the goal is performance over capacity.
>>
>> On the other hand, with this new hardware, what would be the best
>> choice: create a new pool in an existing cluster, or create a
>> completely new cluster? Are there any advantages to creating and
>> maintaining an isolated new cluster?
>>
>> thanks in advance,
>> Xabier
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
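On the journal question above: with journals on dedicated SSDs, each client write touches the SSD journal once and the data disk once; with co-located journals, each write hits the same spinner twice. A back-of-envelope sketch (all MB/s figures below are assumed ballpark numbers, not measurements):

```python
# Rough per-node write ceiling for the two layouts.
# Device throughputs are assumptions for illustration only.
SAS10K_MBPS = 150        # assumed sequential write, 10K SAS spinner
SAS15K_MBPS = 200        # assumed sequential write, 15K SAS spinner
JOURNAL_SSD_MBPS = 200   # assumed sustained write, 100G DC S3700

# Config 1: 6 OSDs, journals on 2 SSDs. Every write funnels through the
# journals first, so the node is capped by the slower of the two stages.
config1 = min(2 * JOURNAL_SSD_MBPS, 6 * SAS10K_MBPS)

# Config 2: 8 OSDs with on-disk journals. Each write lands twice on the
# same spinner (journal + data), roughly halving effective throughput.
config2 = 8 * SAS15K_MBPS // 2

print(f"config 1 ~{config1} MB/s, config 2 ~{config2} MB/s")
```

The point is not the exact numbers but the shape of the bottleneck: under these assumptions the small 100G journal SSDs cap config 1 well below what its six spinners could absorb, while in config 2 the spinners pay the double-write penalty instead. Measuring the actual devices would be needed to decide.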


