Re: Number of SSD for OSD journal

15.12.2014 23:45, Sebastien Han wrote:
> Salut,
> 
> The general recommended ratio (for me at least) is 3 journals per SSD. Using 200GB Intel DC S3700 is great.
> If you’re going with the low-perf scenario I don’t think you should bother buying SSDs; just remove them from the picture and do 12 SATA 7.2K 4TB drives.
> 
> For the medium and medium++ perf configurations a 1:11 ratio is way too high; the SSDs will definitely be the bottleneck here.
> Please also note that (bandwidth wise) with 22 drives you’re already hitting the theoretical limit of a 10Gbps network (~50 MB/s * 22 ~= 1.1 GB/s).
> You can theoretically up that value with LACP (depending on the xmit_hash_policy you’re using of course).
> 
> Btw what’s the network? (since I’m only assuming here).
> 
> 
>> On 15 Dec 2014, at 20:44, Florent MONTHEL <fmonthel@xxxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> I’m buying several servers to test CEPH and I would like to put the journals on SSD drives (maybe it’s not necessary for all use cases).
>> Could you help me identify the number of SSDs I need (SSDs are very expensive and kill the per-GB price business case…)? I don’t want to run into an SSD bottleneck (is there a rule of thumb?).
>> I think I will be with below CONF 2 & 3
>>
>>
>> CONF 1 DELL 730XC "Low Perf":
>> 10 SATA 7.2K 3.5" 4TB + 2 SSD 2.5" 200GB "write-intensive"
>>
>> CONF 2 DELL 730XC "Medium Perf":
>> 22 SATA 7.2K 2.5" 1TB + 2 SSD 2.5" 200GB "write-intensive"
>>
>> CONF 3 DELL 730XC "Medium Perf ++":
>> 22 SAS 10K 2.5" 1TB + 2 SSD 2.5" 200GB "write-intensive"
>>
>> Thanks
>>
>> Florent Monthel
>>
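For reference, the 1:3 journals-per-SSD ratio mentioned above usually just
means carving each SSD into a few small partitions and pointing each OSD's
journal at one of them in ceph.conf. A minimal sketch (device paths,
partition labels and OSD ids are only illustrative):

    [osd.0]
    osd journal = /dev/disk/by-partlabel/journal-0    # partition 1 on SSD 1
    [osd.1]
    osd journal = /dev/disk/by-partlabel/journal-1    # partition 2 on SSD 1
    [osd.2]
    osd journal = /dev/disk/by-partlabel/journal-2    # partition 3 on SSD 1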

There is also another way:
* for CONF 2 and 3, replace the 200GB SSDs with 800GB ones and add another
1-2 SSDs to each node;
* make a tier-1 read-write cache pool on the SSDs (see the sketch after
this list);
* you can also put the journal partitions on them if you wish - then data
moves from SSD to SSD before being flushed down to the HDDs;
* on the HDDs you can make either an erasure-coded pool or a replicated pool.
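A minimal sketch of that cache-tier setup (pool names, PG counts and the
SSD CRUSH ruleset id are only illustrative, not a tested recipe):

    # cold data on the HDDs (erasure-coded here; a replicated pool also works)
    ceph osd pool create cold-data 512 512 erasure
    # hot tier on the SSDs (assumes a CRUSH ruleset that selects only the SSD OSDs)
    ceph osd pool create hot-cache 128 128 replicated
    ceph osd pool set hot-cache crush_ruleset 1
    # put the SSD pool in front of the HDD pool as a writeback cache
    ceph osd tier add cold-data hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-data hot-cache
    # basic hit-set and eviction settings so the cache does not fill up
    ceph osd pool set hot-cache hit_set_type bloom
    ceph osd pool set hot-cache target_max_bytes 500000000000

Clients then just use the cold-data pool; Ceph promotes and flushes
objects between the two tiers by itself.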

With 10Gbit Ethernet and 4 SSDs that are also used for journals, the
bottleneck will more likely be the NIC than the SSDs - and that is easy
to fix later by replacing the NIC.
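Rough numbers behind that (assuming ~400-450 MB/s sustained sequential
write per 800GB DC S3700-class SSD - check your exact model):

    4 SSDs * ~450 MB/s   ~= 1.8 GB/s of flash write bandwidth
    1 10GbE NIC          ~= 1.25 GB/s theoretical, ~1 GB/s in practice

so the NIC saturates well before the SSDs do.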

In my opinion, the backend (cluster) network must be at least as fast as
the frontend (public) one, because the time spent rebalancing the
cluster is very important and must be kept as low as possible, ideally
close to zero.
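In ceph.conf that separation looks like this (the subnets are just
examples):

    [global]
    public network  = 10.0.0.0/24    # frontend: client <-> OSD traffic
    cluster network = 10.0.1.0/24    # backend: replication and recovery traffic

Replication, recovery and backfill then run over the cluster network, so
a slow backend directly stretches the time the cluster spends rebalancing.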


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




