Re: Choosing HP SATA or SAS SSDs for journals

Hello,

On Wed, 4 Nov 2015 12:03:51 +0100 Karsten Heymann wrote:

> Hi,
> 
> 2015-11-04 6:55 GMT+01:00 Christian Balzer <chibi@xxxxxxx>:
> > On Tue, 3 Nov 2015 12:01:16 +0100 Karsten Heymann wrote:
> >> does anyone have experience with HP-branded SSDs for journaling? Given that
> >> everything else is fixed (RAID controller, CPU, etc.) and a fixed budget
> >
> > A raid controller that can hopefully be run well in JBOD mode or
> > something mimicking it closely...
> 
> Yes, although they have to be created as single-disk RAID volumes. But I
> need a RAID 1 for the system disks anyway, and we generally don't buy
> servers without RAID controllers in case they have to be repurposed. And
> the battery-backed write-back cache may even improve performance.
> 
Some controllers will use the cache even in real JBOD mode; with fake
single-disk RAID0 volumes that's a given. However, you may not have full
SMART access to those drives, which can be a PITA at times.
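
For reference, smartmontools can often still reach drives hidden behind a
controller if you pass the controller-specific device type; the device
names and index numbers below are placeholders for whatever your setup
uses, so treat this as a sketch:

    # SMART data for the first physical drive behind an HP Smart Array (CCISS)
    smartctl -a -d cciss,0 /dev/sda

    # equivalent on LSI/MegaRAID based controllers
    smartctl -a -d megaraid,0 /dev/sda

How complete the returned attributes are still depends on the controller
firmware.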

> >> HP 6G SATA Write Intensive 200 GB (804639-B21):
> >> HP 12G SAS Mainstream Endurance 200 GB (779164-B21):
> >> HP 12G SAS Write Intensive 200 GB (802578-B21):
> 
> >> I know that asking does not free me from benchmarking, but maybe
> >> someone has a rough estimate?
> 
> > Unless you can find out who the original manufacturer is and what
> > models they are you will indeed have to benchmark things, as they may
> > be completely unsuitable for Ceph journals (see the countless threads
> > here, especially with regards to some Samsung products).
> 
> The only vendor information I could find is from 2011; back then the
> manufacturer was SanDisk.
>
No idea really, but there are SanDisk (now Borg'ed by WD) employees on
this ML.
 
> > Firstly, the sequential read and random IOPS figures are pointless; the
> > speed at which the SSD can do sequential direct, sync I/O is the only
> > factor that counts when it comes to Ceph journals.
> 
> ok.
> 
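For what it's worth, a common way to measure exactly that is a single-job,
queue-depth-1, direct+sync 4k write run straight against the device,
roughly along these lines (/dev/sdX is a placeholder, and the run will
overwrite whatever is on that device):

    # sustained sync write performance, the number that matters for journals
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test

A drive that can't sustain decent numbers in that mode will make a poor
journal device no matter what its data sheet claims.
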
> > Since you have a "fixed" budget, how many HDDs do you plan per node
> > and/or how many SSDs can you actually fit per node?
> 
> I'm currently planning to use DL380s with 26 2.5" slots (24 at the front,
> two at the back for system disks), of which roughly 2/3 are intended for
> OSD drives and the rest for system and journal disks.
> 
That's a pretty dense configuration; how many nodes do you plan to deploy
initially?
What network infrastructure?

Check the archives for previous threads; I would allocate about 2 GHz of
CPU per OSD...
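
As a rough back-of-the-envelope for your layout: 16-18 OSDs per node at
about 2 GHz each works out to roughly 32-36 GHz of aggregate clock, so
think along the lines of dual 8-10 core CPUs in the 2 GHz and up range.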


> > A DWPD of 10 is almost certainly going to be sufficient, unless you
> > plan to put way too many journals per SSD.
> 
> Good to know.
> 
> > Looking at the SSDs above, the last one is likely to be far more
> > expensive than the rest and barely needs the 12Gb/s interface (for
> > writes). So probably the worst choice.
> 
> Agreed.
> 
> > SSD #1 will serve 3 HDDs nicely, so that would work out well for
> > something with 8 bays, 6 HDDs and 2 SSDs, and similar configurations.
> > It will also be the cheapest one and provide smaller failure domains.
> >
> > SSD #2 can handle 5-6 HDDs, so if your cluster is big enough it might
> > be a good choice for denser nodes.
> 
> So 18 spinning drives, 6 SSDs (model #1) and two system disks seem to
> be at least a reasonable choice for a setup to start with?
> 
Yes.
Note that in my example below the system disks are a RAID10 of the 4 SSDs,
with raw partitions for the journals.
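
If you go that route, here is a sketch of what an OSD with its journal on
a raw SSD partition looks like; the device names, partition numbers and
the 10G journal size are just placeholders:

    # carve a ~10G journal partition out of a journal SSD (repeat per journal)
    sgdisk --new=2:0:+10G --change-name=2:'ceph journal' /dev/sda

    # prepare an OSD on a spinning disk, pointing its journal at that partition
    ceph-disk prepare /dev/sdd /dev/sda2

With the 18 HDD / 6 SSD split above that comes out to 3 journals per SSD,
matching the 3:1 ratio mentioned earlier.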

> > Note that when looking at something similar I chose 4x 100GB DC S3700
> > over 2x 200GB DC S3700 as the prices were nearly identical; the smaller
> > SSDs gave me 800MB/s total instead of 730MB/s, and with 8 HDDs per node
> > I would only lose 2 OSDs in case of an SSD failure.
> 
> 200GB is the smallest enterprise drive size HP sells for current server
> generations.
> 
Yeah, but when you look at the Intel DC S37xx drives for example, the
older (more parallel) SSDs are actually faster at the smaller sizes than
the new ones.
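
For reference, and going by Intel's data sheets rather than anything HP
publishes: the 100GB DC S3700 is rated at roughly 200MB/s sequential
writes versus roughly 365MB/s for the 200GB model, which is where the
4 x ~200 = 800MB/s vs 2 x ~365 = 730MB/s figures above come from.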

Christian

> Thanks a lot for your input,
> Karsten
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


