Hi,

2015-11-04 6:55 GMT+01:00 Christian Balzer <chibi@xxxxxxx>:
> On Tue, 3 Nov 2015 12:01:16 +0100 Karsten Heymann wrote:
>> has anyone experiences with hp-branded ssds for journaling? Given that
>> everything else is fixed (raid controller, cpu, etc...) and a fixed
>
> A raid controller that can hopefully be run well in JBOD mode or something
> mimicking it closely...

Yes, although they have to be created as single-disk RAID 0 volumes. But I
need a RAID 1 for the system disks anyway, and we generally don't buy
servers without RAID controllers in case they have to be repurposed. The
battery-backed writeback cache may even improve performance.

>> HP 6G SATA Write Intensive 200 GB (804639-B21):
>> HP 12G SAS Mainstream Endurance 200 GB (779164-B21):
>> HP 12G SAS Write Intensive 200 GB (802578-B21):
>> I know that asking does not free me from benchmarking, but maybe someone
>> has a rough estimate?
>
> Unless you can find out who the original manufacturer is and what models
> they are you will indeed have to benchmark things, as they may be
> completely unsuitable for Ceph journals (see the countless threads here,
> especially with regards to some Samsung products).

The only vendor information I could find is from 2011; back then the
manufacturer was SanDisk.

> Firstly the sequential reads and random IOPS are pointless, the speed at
> which the SSD can do sequential direct, sync I/O is the only factor that
> counts when it comes to Ceph journals.

Ok.

> Since you have a "fixed" budget, how many HDDs do you plan per node and/or
> how many SSDs can you actually fit per node?

I'm currently planning to use DL380s with 26 2.5" slots (24 at the front,
two at the back for system disks), of which roughly two thirds are intended
for OSD drives and the rest for system and journal disks.

> A DWPD of 10 is with near certainty going to be sufficient, unless you
> plan to put way too many journals per SSD.

Good to know.
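The journals-per-SSD and DWPD points can be sanity-checked with a
back-of-the-envelope calculation. The numbers below (3 journals on a
200 GB SSD, 5 MB/s average client writes per OSD) are hypothetical
assumptions for illustration, not measurements from our cluster:

```python
# Rough endurance check for a journal SSD. Every client write to an
# OSD is written to its journal once, so the SSD absorbs the combined
# average write rate of all journals it hosts.

def required_dwpd(ssd_capacity_gb, journals, avg_write_mb_s):
    """Drive writes per day needed to absorb the journal traffic."""
    daily_writes_gb = journals * avg_write_mb_s * 86400 / 1000
    return daily_writes_gb / ssd_capacity_gb

# Assumed: 200 GB SSD, 3 journals, 5 MB/s average writes per OSD.
dwpd = required_dwpd(ssd_capacity_gb=200, journals=3, avg_write_mb_s=5)
print(f"required DWPD: {dwpd:.2f}")  # 6.48, within a 10 DWPD rating
```

Under these assumptions a 10 DWPD drive has comfortable headroom with 3
journals, but piling many more journals (or a much higher sustained write
rate) onto the same 200 GB drive would exceed it.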
> Looking at the SSDs above, the last one is likely to be far more expensive
> than the rest and barely needs the 12Gb/s interface (for writes). So
> probably the worst choice.

Agreed.

> SSD #1 will serve 3 HDDs nicely, so that would work out well for something
> with 8 bays, 6 HDDs and 2 SDDs and similar configurations. It will also be
> the cheapest one and provide smaller failure domains.
>
> SSD #2 can handle 5-6 HDDs, so if your cluster is big enough it might be a
> good choice for denser nodes.

So 18 spinning drives, 6 SSDs (model #1) and two system disks seem like a
reasonable configuration to start with?

> Note that when looking at something similar I did choose 4 100GB DC S3700
> over 2 200GB DC S3700 as the prices were nearly identical, the smaller
> SSDs gave me 800MB/s total instead of 730MB/s and with 8 HDDs per node I
> only would loose 2 OSDs in case of SSD failure.

200 GB is the smallest enterprise drive HP sells for current server
generations.

Thanks a lot for your input,
Karsten

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com