SSD MTBF


 



On Mon, Sep 29, 2014 at 08:58:38AM +0000, Dan Van Der Ster wrote:
> Hi Emmanuel,
> This is interesting, because we've had sales guys telling us that those Samsung drives are definitely the best for a Ceph journal O_o !
> The conventional wisdom has been to use the Intel DC S3700 because of its massive durability. 
> 
> Anyway, I'm curious what the SMART counters say on your SSDs? Are they really failing due to worn-out P/E cycles, or is it something else?
> 


Here are our current stats (Health is the normalized Wear_Leveling_Count SMART value):

hyp-prs-01
 SSD Status:   sda / 3622 hours / 8800.107 GB written / 58.311 GB/day / Health: 82 %
 SSD Status:   sdb / 3622 hours / 9949.785 GB written / 65.929 GB/day / Health: 80 %
hyp-prs-02
 SSD Status:   sda / 3620 hours / 9516.849 GB written / 63.095 GB/day / Health: 81 %
 SSD Status:   sdb / 3620 hours / 9716.926 GB written / 64.421 GB/day / Health: 80 %
hyp-prs-03
 SSD Status:   sda / 3530 hours / 9501.308 GB written / 64.598 GB/day / Health: 82 %
 SSD Status:   sdb / 3530 hours / 9494.685 GB written / 64.553 GB/day / Health: 80 %
hyp-pa2-02
 SSD Status:   sdc / 5692 hours / 11585.309 GB written / 48.848 GB/day / Health: 80 %
 SSD Status:   sdd / 5692 hours / 12771.698 GB written / 53.851 GB/day / Health: 77 %
hyp-pa2-03
 SSD Status:   sdc / 5691 hours / 12571.167 GB written / 53.014 GB/day / Health: 78 %
 SSD Status:   sdd / 5691 hours / 12882.846 GB written / 54.329 GB/day / Health: 76 %
hyp-pa2-04
 SSD Status:   sdc / 5691 hours / 12542.344 GB written / 52.893 GB/day / Health: 76 %
 SSD Status:   sdd / 5691 hours / 13534.304 GB written / 57.076 GB/day / Health: 77 %
hyp-pa3-02
 SSD Status:   sdc / 8747 hours / 30142.858 GB written / 82.705 GB/day / Health: 48 %
 SSD Status:   sdd / 8747 hours / 30737.615 GB written / 84.337 GB/day / Health: 40 %
hyp-pa3-03
 SSD Status:   sda / 8769 hours / 32669.734 GB written / 89.414 GB/day / Health: 43 %
 SSD Status:   sdb / 965 hours / 4006.301 GB written / 99.639 GB/day / Health: 92 %
hyp-pa3-04
 SSD Status:   sda / 1033 hours / 4078.292 GB written / 94.753 GB/day / Health: 91 %
 SSD Status:   sde / 49 hours / 299.994 GB written / 146.983 GB/day / Health: 99 %
quadrille
 SSD Status:   sdc / 7732 hours / 10775.406 GB written / 33.446 GB/day / Health: 80 %
 SSD Status:   sdd / 7732 hours / 10656.070 GB written / 33.076 GB/day / Health: 81 %
hora
 SSD Status:   sdc / 7734 hours / 10978.489 GB written / 34.068 GB/day / Health: 81 %
 SSD Status:   sdd / 7734 hours / 10978.754 GB written / 34.069 GB/day / Health: 81 %
mazurka
 SSD Status:   sdc / 7732 hours / 11983.782 GB written / 37.197 GB/day / Health: 80 %
 SSD Status:   sdd / 7732 hours / 11803.509 GB written / 36.637 GB/day / Health: 81 %
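For anyone wanting to reproduce this kind of report, the numbers above can be derived from `smartctl -A` output. Here is a minimal Python sketch; it assumes Samsung-style attribute names (`Power_On_Hours`, `Wear_Leveling_Count`, `Total_LBAs_Written` counted in 512-byte sectors) — other vendors use different attribute IDs, names, and raw units, so adapt accordingly:

```python
import re

def parse_smart(text):
    """Extract hours, GB written, and health from `smartctl -A` output.

    Assumes Samsung-style attributes: Power_On_Hours (raw value),
    Total_LBAs_Written (raw value, 512-byte sectors), and
    Wear_Leveling_Count (normalized VALUE column, starts at 100).
    """
    attrs = {}
    for line in text.splitlines():
        # ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        m = re.match(r'\s*\d+\s+(\S+)\s+\S+\s+(\d+)\s+\d+\s+\d+'
                     r'\s+\S+\s+\S+\s+\S+\s+(\d+)', line)
        if m:
            name, value, raw = m.group(1), int(m.group(2)), int(m.group(3))
            attrs[name] = (value, raw)
    hours = attrs['Power_On_Hours'][1]
    gb = attrs['Total_LBAs_Written'][1] * 512 / 1e9  # sectors -> GB
    health = attrs['Wear_Leveling_Count'][0]         # normalized value
    return hours, gb, health

def report(dev, hours, gb, health):
    """Format one status line in the same style as the list above."""
    gb_per_day = gb / (hours / 24.0)
    return (" SSD Status:   %s / %d hours / %.3f GB written"
            " / %.3f GB/day / Health: %d %%"
            % (dev, hours, gb, gb_per_day, health))
```

Feed it the output of `smartctl -A /dev/sdX` for each drive. GB/day is just total GB written divided by power-on days, so it averages over the drive's whole life.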



Those stats are from last Friday. This morning hyp-pa3-02:sdd died, at a bit
under 40 for Wear_Leveling_Count. And this summer we lost 3 SSDs with nearly
the same numbers (hyp-pa3-04:* and hyp-pa3-03:sdb) :(

hyp-pa3-* is the cluster with journals on RAID 1 SSDs, of course.



-- 
Easter-eggs                              Spécialiste GNU/Linux
44-46 rue de l'Ouest  -  75014 Paris  -  France -  Métro Gaîté
Phone: +33 (0) 1 43 35 00 37    -   Fax: +33 (0) 1 43 35 00 76
mailto:elacour at easter-eggs.com  -   http://www.easter-eggs.com

