Re: server advice (Dell R515 or Supermicro ....)

Hi Alexandre,

Are you going with a 10Gb network? It's not an issue for IOPS, but more for bandwidth. If so, read the following:

I personally wouldn't go with a 1:6 disk-to-journal ratio; I think 1:5 (or even 1:4) is preferable.
A 10K SAS drive gives you around 140 MB/s of sequential writes.
So if you put the journal on an SSD, you expect that SSD to sustain at least 140 MB/s per OSD disk behind it if you don't want to slow things down.
If you do so, 140 MB/s x 10 disks already fills your 10Gb link, so either you don't need that many disks or you don't need the SSDs.
It depends on the performance that you want to achieve.
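To make the arithmetic explicit, here is a rough back-of-the-envelope sketch in Python. The 140 MB/s per disk, the 1:5 journal ratio and the ~1250 MB/s for 10GbE are assumed figures for illustration only, not measurements:

# Rough Ceph node sizing sketch -- assumed figures, not measurements.
DISK_SEQ_WRITE_MBS = 140          # ~sequential write speed of one 10K SAS drive
NUM_OSD_DISKS = 10                # OSD data disks per node
JOURNAL_RATIO = 5                 # OSD disks per journal SSD (a 1:5 ratio)
NIC_BANDWIDTH_MBS = 10_000 / 8    # 10 Gbit/s expressed in MB/s (~1250 MB/s)

per_ssd = DISK_SEQ_WRITE_MBS * JOURNAL_RATIO      # load each journal SSD must absorb
aggregate = DISK_SEQ_WRITE_MBS * NUM_OSD_DISKS    # combined write bandwidth of the OSD disks

print(f"Each journal SSD must sustain : {per_ssd} MB/s of sequential writes")
print(f"Aggregate OSD write bandwidth : {aggregate} MB/s")
print(f"10GbE link capacity           : {NIC_BANDWIDTH_MBS:.0f} MB/s")
if aggregate > NIC_BANDWIDTH_MBS:
    print("The 10GbE link saturates before the disks do.")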
Another thing: I also wouldn't use the DC S3700, since that disk was definitely made for IOPS-intensive workloads. The journal is purely sequential (small sequential blocks; IIRC Stephan mentioned 370k blocks).
I would instead use an SSD with good large-sequential-write capabilities, like the 525 series 120GB.

Cheers.
–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien.han@xxxxxxxxxxxx 
Address : 10, rue de la Victoire - 75009 Paris 
Web : www.enovance.com - Twitter : @enovance 

On 15 Jan 2014, at 12:47, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:

> Hello List,
> 
> I'm going to build an RBD cluster this year, with 5 nodes.
> 
> I would like to have this kind of configuration for each node:
> 
> - 2U
> - 2.5-inch drives
> 
> OS: 2 x SAS drives
> Journal: 2 x Intel DC S3700 100GB SSD
> OSD: 10 or 12 x SAS Seagate Savvio 10K.6 900GB
> 
> 
> 
> I see on the mailing list that Inktank uses Dell R515s.
> I currently own a lot of Dell servers and I get good prices on them.
> 
> But I have also seen on the mailing list that the Dell PERC H700 can have some performance problems,
> and also that it's not easy to flash the firmware for JBOD mode.
> http://www.spinics.net/lists/ceph-devel/msg16661.html
> 
> I don't know whether these performance problems have finally been solved.
> 
> 
> 
> Another option could be to use a Supermicro server;
> they have some 2U, 16-disk chassis with one or two LSI JBOD controllers.
> But I have had really bad experiences with Supermicro motherboards in the past
> (mainly firmware bugs, IPMI card bugs, ...).
> 
> Does anyone have experience with Supermicro, and can you advise me on a good motherboard model?
> 
> 
> Best Regards,
> 
> Alexandre Derumier
> 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
