Re: Number of SSD for OSD journal

Hi,

The general recommended ratio (for me at least) is 3 journals per SSD. Using 200GB Intel DC S3700s is great.
If you’re going with the low-perf scenario, I don’t think you should bother buying SSDs at all; just remove them from the picture and do 12 SATA 7.2K 4TB.
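
Back-of-the-envelope for where that 3:1 figure comes from (the throughput numbers below are assumptions for illustration, not measured values):

    # Rough journal-to-SSD sizing sketch (assumed figures, adjust for your hardware).
    ssd_seq_write_mb_s = 365   # assumed sustained sequential write of a 200GB DC S3700
    osd_write_mb_s = 100       # assumed sequential write one 7.2K spinner can push to its journal
    journals_per_ssd = ssd_seq_write_mb_s // osd_write_mb_s
    print(journals_per_ssd)    # -> 3, i.e. ~3 journals per SSD before the SSD saturates

If your hardware measures differently, plug in your own numbers; the ratio is simply SSD write throughput divided by per-OSD write throughput.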

For the medium and medium++ perf configs, a 1:11 SSD-to-OSD ratio is way too high; the SSDs will definitely be the bottleneck here.
Please also note that, bandwidth-wise, with 22 drives you’re already hitting the theoretical limit of a 10Gbps network (~50MB/s * 22 ≈ 1.1GB/s ≈ 8.8Gbps).
You can theoretically push past that with LACP (depending on the xmit_hash_policy you’re using, of course).
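
To make the arithmetic explicit (the ~50MB/s per-drive figure is an assumption about sustained throughput under load, not a spec):

    # Aggregate spinner bandwidth vs. a single 10Gbps link (assumed figures).
    drives = 22
    per_drive_mb_s = 50                        # assumed usable throughput per 7.2K drive
    aggregate_mb_s = drives * per_drive_mb_s   # 1100 MB/s
    aggregate_gbps = aggregate_mb_s * 8 / 1000
    print(aggregate_mb_s, aggregate_gbps)      # -> 1100 MB/s, ~8.8 Gbps

So the 22 spinners alone nearly fill a single 10Gbps link, which is why LACP (or a faster network) matters here.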

Btw, what’s the network? (I’m only assuming here.)


> On 15 Dec 2014, at 20:44, Florent MONTHEL <fmonthel@xxxxxxxxxxxxx> wrote:
> 
> Hi,
> 
> I’m buying several servers to test Ceph and I would like to configure the journal on SSD drives (maybe it’s not necessary for all use cases).
> Could you help me identify the number of SSDs I need (SSDs are very expensive and the price per GB is a business-case killer…)? I don’t want to run into an SSD bottleneck (is there a sizing chart?).
> I think I will go with CONF 2 or 3 below.
> 
> 
> CONF 1 DELL 730XC "Low Perf":
> 10 SATA 7.2K 3.5" 4TB + 2 SSD 2.5" 200GB "intensive write"
> 
> CONF 2 DELL 730XC "Medium Perf":
> 22 SATA 7.2K 2.5" 1TB + 2 SSD 2.5" 200GB "intensive write"
> 
> CONF 3 DELL 730XC "Medium Perf ++":
> 22 SAS 10K 2.5" 1TB + 2 SSD 2.5" 200GB "intensive write"
> 
> Thanks
> 
> Florent Monthel


Cheers.
––––
Sébastien Han
Cloud Architect

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72
Mail: sebastien.han@xxxxxxxxxxxx
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance


