On 04/21/2015 03:04 AM, Andrei Mikhailovsky wrote:
Hi, I have been testing the Samsung 840 Pro (128 GB) for quite some time, and I can also confirm that this drive is unsuitable for OSD journals. The performance and latency I get from these drives (according to ceph osd perf) are 10 to 15 times worse than the Intel 520. The Intel 530 drives are also pretty awful; they are meant to be a replacement for the 520, but their performance is poor. I have found the Intel 520 to be a reasonable drive in performance per price for a cluster without a great deal of writes. However, they are not made anymore.
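For context, "ceph osd perf" is where the latency numbers above come from: it reports the journal commit and apply latencies per OSD, and drives that struggle with synchronous journal writes show up immediately in the fs_commit_latency column. The output below is only an illustrative sketch (the OSD numbers and latencies are made up, not Andrei's actual figures):

    # ceph osd perf
    osd  fs_commit_latency(ms)  fs_apply_latency(ms)
      0                      2                     3
      1                     41                    55

An OSD journaling on a flush-friendly SSD typically sits in the low single-digit milliseconds here; a consumer drive that struggles with O_DSYNC can sit an order of magnitude higher, which matches the 10-15x gap Andrei describes.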
Be very careful with the 520s. I have some in my test lab myself and they are fantastic as far as performance goes, but I think it's possible they ignore ATA_CMD_FLUSH. O_DSYNC writes are very fast on them, yet they don't have any apparent power-loss protection. I suspect Intel fixed this in the 530, which is why it is so much slower at O_DSYNC writes.
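A quick way to check how a drive behaves for journal writes is to benchmark small synchronous writes directly, since that is the pattern the OSD journal generates. A rough sketch (the device name is a placeholder, and this overwrites data on it, so only point it at a scratch disk):

    # 4k writes, each forced through the cache with O_DSYNC,
    # mimicking OSD journal traffic
    dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync

A drive with real power-loss protection (e.g. the S3700) posts consistent, honest numbers on this test; a drive that quietly drops flushes can look suspiciously fast, which is exactly what is suspected of the 520 above.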
Otherwise, it seems that the Intel S3600 and S3700 series are good performers and have a much longer life expectancy.

Andrei

------------------------------------------------------------------------
*From: *"Eneko Lacunza" <elacunza@xxxxxxxxx>
*To: *"J-P Methot" <jpmethot@xxxxxxxxxx>, "Christian Balzer" <chibi@xxxxxxx>, ceph-users@xxxxxxxxxxxxxx
*Sent: *Tuesday, 21 April, 2015 8:18:20 AM
*Subject: *Re: Possible improvements for a slow write speed (excluding independent SSD journals)

Hi,

I'm just writing to stress what others have already said, because it is very important that you take it seriously.

On 20/04/15 19:17, J-P Methot wrote:
> On 4/20/2015 11:01 AM, Christian Balzer wrote:
>>
>>> This is similar to another thread running right now, but since our
>>> current setup is completely different from the one described in the
>>> other thread, I thought it may be better to start a new one.
>>>
>>> We are running Ceph Firefly 0.80.8 (soon to be upgraded to 0.80.9). We
>>> have 6 OSD hosts with 16 OSDs each (so a total of 96 OSDs). Each OSD
>>> is a Samsung SSD 840 EVO on which I can reach write speeds of roughly
>>> 400 MB/s, plugged in JBOD on a controller that can theoretically
>>> transfer at 6 Gb/s. All of that is linked to OpenStack compute nodes
>>> over two bonded 10 Gbps links (so a maximum transfer rate of 20 Gbps).
>>>
>> I sure as hell hope you're not planning to write all that much to this
>> cluster. But then again you're worried about write speed, so I guess
>> you do.
>> Those _consumer_ SSDs will be dropping like flies; there are a number
>> of threads about them here.
>>
>> They also might be of the kind that don't play well with O_DSYNC; I
>> can't recall for sure right now, check the archives.
>> Consumer SSDs universally tend to slow down quite a bit when not
>> TRIM'ed and/or subjected to prolonged writes, like those generated by
>> a benchmark.
> I see, yes, it looks like these SSDs are not the best for the job. We
> will not change them for now, but if they start failing, we will
> replace them with better ones.

I tried to put a Samsung 840 Pro 256GB in a Ceph setup. It is supposed to be quite a bit better than the EVO, right?

It was total crap. Not just "not the best for the job". TOTAL CRAP. :)

It can't give any useful write performance for a Ceph OSD. Spec-sheet numbers don't matter for this; they don't hold for a Ceph OSD, period. And yes, the drive is fine and works like a charm under workstation workloads.

I suggest you at least get some Intel S3700/S3610 drives and use them for the journals of those Samsung drives; I think that could help performance a lot.

Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997 943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
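If you do add S3700/S3610 journals as Eneko suggests, each OSD's journal just needs to live on a partition of the faster drive; one S3700 commonly carries the journals for 4-5 data disks. A rough sketch using ceph-deploy as it worked around Firefly (the hostname and device names are placeholders, and the exact syntax may differ on your version):

    # data on the Samsung EVO (/dev/sdd), journal on a pre-made
    # partition of the Intel S3700 (/dev/sdb1)
    ceph-deploy osd create osd-host1:/dev/sdd:/dev/sdb1

Keep in mind the journal SSD then becomes a single point of failure for every OSD journaling on it, so losing one S3700 takes those 4-5 OSDs down with it.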
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com