Re: SSD journals killed by VMs generating 500 IOPs (4kB) non-stop for a month, seemingly because of a syslog-ng bug

There have been numerous reports on the mailing list of Samsung EVOs
and Pros failing far before their expected wear. This is most likely due
to the 'uncommon' workload of Ceph: the controllers of those drives
are not really designed to handle the continuous direct sync writes
that Ceph does, and because of this they can fail without warning
(controller failure rather than MLC wear-out).
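
To give a feel for what that kind of workload looks like, here is a rough
Python sketch of the access pattern (this is not Ceph code, just small
O_DSYNC writes where each one has to hit stable media before the next is
issued; the path and duration are arbitrary):

import os
import time

# Illustrative only: emulate the shape of a journal-style workload with
# small writes that must reach stable media before each call returns.
# (Not Ceph code; it just shows why drives that depend on caching small
# writes struggle with a sync-heavy pattern.)
PATH = "/tmp/syncwrite-test.bin"   # hypothetical scratch file
BLOCK = b"\x00" * 4096             # 4 kB writes, as in the reported workload

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
start = time.time()
ops = 0
try:
    while time.time() - start < 10:   # run for ~10 seconds
        os.write(fd, BLOCK)           # returns only once the write is durable
        ops += 1
finally:
    os.close(fd)

print("sustained sync-write IOPS: %.0f" % (ops / (time.time() - start)))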

We have tested the performance of the Micron M600 drives and, as long
as you don't fill them up, they perform like the Intel line. I just
don't know if they will die prematurely like a lot of the Samsungs
have. We have a load of Intel S3500s that we can put in if they start
failing, so I'm not too worried at the moment.

The only drives that I've heard really good things about are the Intel
S3700 (and I suspect the S3600 and S3500 could be used as well if you
take some additional precautions) and the Samsung DC PROs (it has to
have both DC and PRO in the name). The Micron M600s are a good value
with decent performance, and I plan on keeping the list informed about
them as time goes on.

With a cluster as idle as yours it may not make that much of a
difference. Where we are pushing thousands of IOPS all the time, we have
a real problem if the SSDs can't take the load.
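
To put rough numbers on that, you can sanity-check a journal SSD by
dividing its rated endurance by the expected sustained write rate. A quick
back-of-the-envelope in Python (the endurance rating, IOPS level and write
amplification factor here are illustrative placeholders, not measurements
from our cluster):

# Rough endurance estimate for a journal SSD under sustained small writes.
# All numbers below are illustrative placeholders, not measurements.
RATED_ENDURANCE = 150e12   # drive's rated endurance in bytes (~150 TBW, assumed)
WRITE_IOPS      = 2000     # sustained write IOPS hitting this journal (assumed)
IO_SIZE         = 4096     # bytes per write
WRITE_AMP       = 10       # assumed effective write amplification factor

bytes_per_day = WRITE_IOPS * IO_SIZE * 86400 * WRITE_AMP
days = RATED_ENDURANCE / bytes_per_day
print("estimated journal lifetime: %.0f days (%.1f years)" % (days, days / 365.0))
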
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Sun, Nov 22, 2015 at 10:40 AM, Alex Moore <alex@xxxxxxxxxx> wrote:
> I just had 2 of the 3 SSD journals in my small 3-node cluster fail within 24
> hours of each other (not fun, although thanks to a replication factor of 3x,
> at least I didn't lose any data). The journals were 128 GB Samsung 850 Pros.
> However I have determined that it wasn't really their fault...
>
> This is a small Ceph cluster running just a handful of relatively idle Qemu
> VMs using librbd for storage, and I had originally estimated that based on
> my low expected volume of write IO the Samsung 850 Pro journals would last
> at least 5 years (which would have been plenty). I still think that estimate
> was correct, but the reason they died prematurely (in reality they lasted 15
> months) seems to have been that a number of my VMs had been hammering their
> disks continuously for almost a month, and I only noticed retrospectively
> after the journals had died. I tracked it back to some sort of bug in
> syslog-ng: the affected VMs took an update to syslog-ng on October 24th, and
> from the daily logrotate early on the 25th onwards, the syslog daemons were
> together generating about 500 IOPS of 4 kB writes continuously for the next
> 4 weeks, until the journals failed.
>
> As a result, I reckon that taking write amplification into account the SSDs
> must have each written just over 1PB over that period - way more than they
> are supposed to be able to handle - so I can't blame the SSDs.
>
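
Working those numbers through makes the point about write amplification
concrete. This just reconstructs Alex's estimate: the 500 IOPS, 4 kB writes,
roughly 4 weeks and ~1 PB figures come from his mail, the rest is arithmetic:

# Reconstructing the estimate: ~500 write IOPS of 4 kB, sustained for ~4 weeks.
iops, io_size, days = 500, 4096, 28
logical_bytes = iops * io_size * 86400 * days
print("logical writes: %.1f TB" % (logical_bytes / 1e12))          # ~5.0 TB

# Getting from ~5 TB of logical writes to ~1 PB of actual NAND writes
# implies an effective write amplification on the order of 200x.
print("implied amplification for 1 PB: ~%.0fx" % (1e15 / logical_bytes))
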
> I do have graphs tracking various metrics for the Ceph cluster, including
> IOPs, latency, and read/write throughput - which is how I worked out what
> happened afterwards - but unfortunately I didn't have any alerting set up to
> warn me when there were anomalies in the graphs, and I wasn't proactively
> looking at the graphs on a regular basis.
>
> So I think there is a lesson to be learned here... even if you have
> correctly spec'd your SSD journals in terms of endurance for the anticipated
> level of write activity in a cluster, it's still important to keep an eye on
> whether the actual write activity matches those expectations, as it's quite
> easy for a misbehaving VM to severely drain the life expectancy of SSDs by
> generating 4 kB write IOs as quickly as it can for a long period of time!
>
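
One cheap way to close that gap is an automated check for sustained write
IOPS well above the expected baseline. A minimal sketch of the idea in
Python, independent of any particular metrics stack (the baseline,
multiplier and sampling interval are assumptions to tune against your own
graphs):

# Minimal sketch of an alert on sustained write IOPS, independent of any
# particular metrics stack. 'samples' would be whatever already feeds the
# graphs, e.g. one cluster-wide write-IOPS reading per minute.
EXPECTED_BASELINE_IOPS = 50    # assumed normal level for a mostly idle cluster
ALERT_MULTIPLIER = 5           # alert when load stays 5x above the baseline...
SUSTAINED_SAMPLES = 60         # ...for a full hour of 1-minute samples

def sustained_write_alert(samples):
    """True if the most recent SUSTAINED_SAMPLES readings all exceed the
    threshold, i.e. the load is persistently high rather than a short burst."""
    threshold = EXPECTED_BASELINE_IOPS * ALERT_MULTIPLIER
    recent = samples[-SUSTAINED_SAMPLES:]
    return len(recent) == SUSTAINED_SAMPLES and all(v > threshold for v in recent)

# Example: an hour at ~500 IOPS, as in the syslog-ng incident, trips the alert.
if sustained_write_alert([500] * 60):
    print("ALERT: sustained write IOPS far above baseline - check the VMs")
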
> I have now replaced all 3 journals with 240 GB Samsung SM863 SSDs, which
> were only about twice the cost of the smaller 850 Pros. And I'm already
> noticing a massive performance improvement (reduction in write latency, and
> higher IOPs). So I'm not too upset about having unnecessarily killed the 850
> Pros. But I thought it was worth sharing the experience...
>
> FWIW the OSDs themselves are on 1TB Samsung 840 Evos, which I have been
> happy with so far (they've been going for about 18 months at this stage).
>
> Alex
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


