Re: Choosing hp sata or sas SSDs for journals

Hello,

On Tue, 3 Nov 2015 12:01:16 +0100 Karsten Heymann wrote:

> Hi,
> 
> Does anyone have experience with HP-branded SSDs for journaling? Given
> that everything else is fixed (RAID controller, CPU, etc.) and a fixed

A RAID controller that can hopefully be run in JBOD mode, or something
closely mimicking it...

> budget, would it be better to go with more of the cheaper 6G SATA Write
> intensive drives or should I aim for (then fewer) 12G SAS models? Here
> are the specs:
> 
> HP 6G SATA Write Intensive 200 GB (804639-B21):
> - Sequential reads / writes (MB/s): 540 / 300
> - Random reads /writes (IOPS): 64,500 / 42,000
> - DWPD: 10
> 
> HP 12G SAS Mainstream Endurance 200 GB (779164-B21):
> - Sequential reads / writes (MB/s): 1,000 / 510
> - Random reads /writes (IOPS): 70,000 / 51,000
> - DWPD: 10
> 
> HP 12G SAS Write Intensive 200 GB (802578-B21):
> - Sequential reads / writes (MB/s): 1,000 / 660
> - Random reads /writes (IOPS): 106,000 / 83,000
> - DWPD: 25
> 
> (Source: http://www8.hp.com/h20195/v2/GetPDF.aspx%2F4AA4-7186ENW.pdf)
> 
> I know that asking does not free me from benchmarking, but maybe someone
> has a rough estimate?
>
Unless you can find out who the original manufacturer is and which models
they actually are, you will indeed have to benchmark things, as they may be
completely unsuitable for Ceph journals (see the countless threads here,
especially with regard to some Samsung products).

Firstly, the sequential read and random IOPS figures are beside the point;
the speed at which the SSD can do sequential direct, sync writes is the
only factor that counts when it comes to Ceph journals.
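
If/when you do benchmark, the usual quick check (a sketch, assuming fio is
installed and /dev/sdX is a scratch device you can safely overwrite) is a
single-job sequential write with direct and sync I/O:

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives that look great on the data sheet can collapse to a few MB/s under
this workload, which is exactly what makes them useless as journal devices.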

Since you have a "fixed" budget, how many HDDs do you plan per node and/or
how many SSDs can you actually fit per node?

A DWPD of 10 is almost certainly going to be sufficient, unless you plan
to put far too many journals on each SSD.
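
Rough arithmetic behind that (my numbers, not measured): 200 GB x 10 DWPD
= 2 TB of journal writes per day, which works out to about 23 MB/s
sustained around the clock. With 3-4 journals per SSD that still allows
roughly 6-8 MB/s of average write traffic per OSD, a level few clusters
reach as a 24x7 average.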
  
Looking at the SSDs above, the last one is likely to be far more expensive
than the rest and only barely needs the 12Gb/s interface for its write
speed, so it is probably the worst choice.

SSD #1 will serve 3 HDDs nicely, so that would work out well for something
with 8 bays (6 HDDs and 2 SSDs) and similar configurations. It will also
be the cheapest one and give you smaller failure domains.

SSD #2 can handle 5-6 HDDs, so if your cluster is big enough it might be a
good choice for denser nodes.
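
(The HDD counts above simply divide the SSD's sequential write speed by
what a journal realistically has to absorb per spinning disk, assuming
very roughly 100 MB/s per HDD: 300 / 100 = 3 for the SATA model and
510 / 100 = about 5 for the SAS Mainstream one.)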

Note that when looking at something similar I chose 4x 100GB DC S3700
over 2x 200GB DC S3700, as the prices were nearly identical: the smaller
SSDs gave me 800MB/s total instead of 730MB/s, and with 8 HDDs per node I
would only lose 2 OSDs in case of an SSD failure.
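
(For the curious, those totals come from Intel's data sheet numbers, if I
recall them correctly: 4 x ~200 MB/s for the 100GB model = 800 MB/s versus
2 x ~365 MB/s for the 200GB model = 730 MB/s, and with 8 HDDs split over
4 SSDs an SSD failure takes out only 2 OSDs instead of 4.)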

Christian

> Best regards
> Karsten


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


