Advice on hardware configuration

On 06/05/14 18:40, Christian Balzer wrote:
> Hello,
>
> On Tue, 06 May 2014 17:07:33 +0200 Xabier Elkano wrote:
>
>> Hi,
>>
>> I'm designing a new ceph pool with new hardware and I would like to
>> receive some suggestions.
>> I want to use a replica count of 3 in the pool, and the idea is to buy
>> 3 new servers, each with a 10-drive 2.5" chassis and two 10Gbps NICs.
>> I have two configurations in mind:
>>
> As Wido said, more nodes are usually better, unless you're quite aware of
> what you're doing and why.
Yes, I know that, but what is the minimum number of nodes to start with?
Is starting with three nodes not a feasible option?
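For reference, here is a quick back-of-the-envelope (plain Python, using
the per-node totals quoted below) for what replica 3 across three nodes
means for usable capacity. With the default CRUSH rule placing one
replica per host, three hosts really is the hard minimum for size=3, and
losing a node leaves nowhere to restore the third copy:

    # Usable capacity with a replica count of 3 across 3 nodes.
    # Per-node OSD totals are the ones quoted in this thread.
    NODES = 3
    REPLICAS = 3
    per_node_tb = {
        "option 1 (6x 900G SAS10K)": 5.4,
        "option 2 (8x 600G SAS15K)": 3.6,
    }
    for name, tb in per_node_tb.items():
        raw = NODES * tb
        usable = raw / REPLICAS  # with 3 nodes and size=3: one node's worth
        print(f"{name}: {raw:.1f} TB raw -> {usable:.1f} TB usable")

So the cluster's usable capacity is exactly one node's worth of disk in
either option.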
>  
>> 1- With journal on SSDs
>>  
>> OS: 2x Intel DC S3500 100G SSD, RAID 1
>> Journal: 2x Intel DC S3700 100G SSD, 3 journals per SSD
> As I wrote just a moment ago, use at least the 200GB ones if performance
> is such an issue for you.
> If you can afford it, use four S3700s and share OS and journal; the OS
> IOPS will not be that significant, especially if you're using a
> controller with writeback cache.
The journal can be shared with the OS, but I like having RAID 1 for the
OS. I think the only drawback is that it uses two dedicated disk slots
for the OS.
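On journal sizing: going by the rule of thumb in the Ceph docs, the 100G
SSDs are nowhere near short on space, so Christian's 200GB advice is
presumably about the S3700's sequential write speed scaling with
capacity rather than about room for the journals. A minimal sketch, with
the throughput figure being an assumption rather than a measurement:

    # Rule-of-thumb journal sizing from the Ceph docs:
    #   osd journal size = 2 * expected throughput * filestore max sync interval
    sync_interval_s = 5        # filestore max sync interval, default 5 s
    osd_throughput_mb = 180    # assumed streaming rate of one SAS10K drive
    journal_mb = 2 * osd_throughput_mb * sync_interval_s
    print(f"per-OSD journal: ~{journal_mb} MB")   # ~1800 MB, 3 per SSD

Three ~2GB journals fit on a 100G device many times over; the real
constraint is the device's write throughput absorbing three OSDs' worth
of journal traffic.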
>
>> OSD: 6 SAS10K 900G (SAS2 6Gbps), each running an OSD process. Total
>> size for OSDs: 5.4TB
>>
>> 2- With journal in a partition on the spinners.
>>
>> OS: 2x Intel DC S3500 100G SSD, RAID 1
>> OSD+journal: 8 SAS15K 600G (SAS3 12Gbps), each running an OSD process
>> and its journal. Total size for OSDs: 3.6TB
>>
> I have no idea why anybody would spend money on 12Gb/s HDDs when even
> most SSDs have trouble saturating a 6Gb/s link.
> Given the double write penalty in IOPS, I think you're going to find
> this more expensive (per byte) and slower than a well-rounded option 1.
But these disks are 2.5" 15K; I didn't pick them just for the link
speed. The other 2.5" SAS (SAS2) disks I found are only 10K, and the 15K
disks should be better for random IOPS.
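To put rough numbers on the double-write penalty, here is a crude
comparison. The per-drive figures (~150 IOPS for a 2.5" 10K drive, ~210
for a 15K) are assumptions, not measurements, and controller cache is
ignored:

    # Client-visible write IOPS for both options, replica 3, 3 nodes.
    # With a colocated journal every client write hits the spinner twice.
    NODES = 3
    REPLICAS = 3

    def client_write_iops(drives_per_node, drive_iops, journal_on_drive):
        per_drive = drive_iops / 2 if journal_on_drive else drive_iops
        return NODES * drives_per_node * per_drive / REPLICAS

    print(f"option 1: ~{client_write_iops(6, 150, False):.0f} write IOPS")  # ~900
    print(f"option 2: ~{client_write_iops(8, 210, True):.0f} write IOPS")   # ~840

By this model the cheaper 10K drives with SSD journals still come out
slightly ahead on writes, before even counting the SSDs' ability to
absorb bursts.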
>
>> The budget is similar in both configurations, but the total capacity
>> is not. Which configuration would be better from a performance point
>> of view? In the second configuration I know the controller's writeback
>> cache could be critical; the servers have an LSI 3108 controller with
>> 2GB of cache. I have to plan this storage as a KVM image backend, and
>> the goal is performance over capacity.
>>
> Writeback cache can be very helpful, but it is not a miracle cure.
> Not knowing your actual load and I/O patterns, it might very well be
> enough, though.
The IO patterns are largely unknown. I would assume 40% read and 60%
write, but the IO size is unknown because the storage backs KVM images
for many customers with different purposes.
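Given that uncertainty, a crude mix model can at least be refit once
real numbers are measured. This sketch reuses the assumed per-drive
figures from above and ignores both the controller's writeback cache and
read caching, which will matter a lot in practice:

    # Blended IOPS for a 40% read / 60% write mix. A read costs one disk
    # op on one OSD; a replicated write costs REPLICAS ops, doubled when
    # the journal shares the spinner.
    NODES = 3
    REPLICAS = 3
    READ, WRITE = 0.4, 0.6

    def blended_iops(drives_per_node, drive_iops, journal_on_drive):
        total = NODES * drives_per_node * drive_iops
        write_cost = REPLICAS * (2 if journal_on_drive else 1)
        return total / (READ + WRITE * write_cost)

    print(f"option 1: ~{blended_iops(6, 150, False):.0f} blended IOPS")
    print(f"option 2: ~{blended_iops(8, 210, True):.0f} blended IOPS")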
>
> Regards,
>
> Christian


