On 17-10-11 09:50 AM, Josef Zelenka wrote:
> Hello everyone,
> lately we've had issues buying the SSDs we use for journaling (the
> Kingston V300, which Kingston has stopped making), so we decided to
> switch to a different model and started researching which would be the
> best price/performance for us. We compared five models to check whether
> they fit our needs: the SSDNow V300, HyperX Fury, SSDNow KC400, SSDNow
> UV400 and SSDNow A400. The best one is still the V300, with the highest
> IOPS at 59,001. Second best, and still usable, was the HyperX Fury with
> 45,000 IOPS. The other three had terrible results; the maximum we got
> was around 13,000 IOPS with the dsync and direct flags. We also tested
> Samsung SSDs (the EVO series) and got similarly bad results. To get to
> the root of my question: I am pretty sure we are not the only ones
> affected by the V300's demise. Is there anyone else out there with
> benchmarking data or knowledge of SSDs with good price/performance for
> Ceph journaling? I can also share the complete benchmarking data my
> coworker collected, if anyone is interested.
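A side note on methodology before I answer: "dsync and direct flags"
sounds like the usual dd-based journal test. For anyone wanting to
reproduce the numbers above, a minimal sketch of that kind of test (my
assumption, not necessarily the exact command used; /dev/sdX is a
placeholder and the write is destructive):

    # 4 KiB writes with O_DIRECT and O_DSYNC at queue depth 1 -- the
    # access pattern of a Ceph journal. Overwrites data on /dev/sdX!
    # Note: writing zeros can flatter drives with compressing
    # controllers; writing from a file of random data avoids that.
    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync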
Never, absolutely never pick consumer-grade SSDs for a Ceph cluster, and
in particular never pick a drive with a low TBW rating for the journal.
Ceph is going to kill it within a few months. Besides, consumer-grade
drives are not optimized for Ceph-like/enterprise workloads, resulting in
weird performance characteristics: tens of thousands of IOPS for the
first few seconds, then a drop to 1K IOPS (typical for drives with TLC
NAND and an SLC cache); reasonable performance until some write queue
depth is hit, then severe degradation (an underperforming controller); or
OSD journals destroyed on power failure (no BBU or capacitors to power
the drive through a flush when the PSU goes down).
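You can see the first two failure modes for yourself: run the write test
long enough for any SLC cache to fill, and sweep the queue depth to find
where the controller gives up. A rough fio sketch (parameters are
illustrative, /dev/sdX is a placeholder, and the test destroys data):

    # Random 4 KiB writes at increasing queue depths; a weak controller
    # plateaus or regresses well before QD32. Run each step for a few
    # minutes and the TLC/SLC-cache cliff shows up as IOPS falling
    # mid-run (--status-interval=10 prints interim numbers).
    for qd in 1 4 8 16 32; do
        fio --name=qd$qd --filename=/dev/sdX --direct=1 \
            --ioengine=libaio --rw=randwrite --bs=4k --iodepth=$qd \
            --runtime=300 --time_based --status-interval=10
    done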
You may want to look at this:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
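For convenience, the core of the test from that post looks roughly like
this (from memory -- check the post for the exact invocation; /dev/sdX
is a placeholder and the test is destructive):

    # Single-job 4 KiB synchronous writes at queue depth 1, i.e. the
    # journal workload. A good journal SSD sustains tens of thousands
    # of IOPS here; the consumer drives described above fall to ~1K
    # or worse.
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting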
--
Piotr Dałek
piotr.dalek@xxxxxxxxxxxx
https://www.ovh.com/us/