Re: Tip of the week: don't use Intel 530 SSDs for journals

FWIW, I've got Intel 520s in one of our test nodes at Inktank that has had a fair amount of data thrown at it, and we haven't lost a drive in 2 years. Having said that, I'd use higher write-endurance drives in production, especially with how much cheaper they are getting these days.
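
As a rough sanity check, expected lifetime is just the drive's rated endurance (TBW) divided by your write rate. A minimal back-of-the-envelope sketch in Python; the TBW figure here is purely illustrative, not an actual 530 spec, so check your datasheet:

# Hypothetical numbers for illustration only -- not a real drive rating.
rated_tbw = 36.5       # rated endurance, TB written over the drive's life
tb_per_year = 95.0     # observed write rate (e.g. ~190 TB over 2 years)

print(f"~{rated_tbw / tb_per_year:.1f} years to rated endurance")  # ~0.4 years

With consumer-grade numbers in that ballpark, a journal SSD can hit its rated endurance in well under a year, which lines up with the half-year lifetime Erik reports below.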

Mark

On 11/25/2014 03:25 PM, Andrei Mikhailovsky wrote:
Thanks for the advice!

I've checked a couple of my Intel 520s, which I use for the OSD journals
and have been using for almost 2 years now.
I do not have a great deal of load though: only about 60 VMs or so
with general usage.

Disk 1:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
233 Media_Wearout_Indicator 0x0032   096   096   000    Old_age  Always       -       0
225 Host_Writes_32MiB       0x0032   100   100   000    Old_age  Always       -       5754781

Disk 2:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
233 Media_Wearout_Indicator 0x0032   095   095   000    Old_age  Always       -       0
225 Host_Writes_32MiB       0x0032   100   100   000    Old_age  Always       -       5697133
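
In case it helps, a minimal sketch along these lines (assuming smartmontools is installed; the device paths are placeholders for your journal SSDs) can pull the same two attributes on a schedule:

#!/usr/bin/env python3
# Print the wear-out and host-writes SMART attributes for each journal SSD.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # placeholder device paths
WATCH = ("Media_Wearout_Indicator", "Host_Writes_32MiB")

for dev in DEVICES:
    # smartctl -A prints the vendor-specific SMART attribute table
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if any(attr in line for attr in WATCH):
            print(dev, line)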

So, from what I can see, I still have 95 and 96 percent of life left on
the disks, and they have each written around 190 terabytes, which seems
like a lot for a consumer-grade disk. Or maybe I am misreading the data?
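
For reference, doing the conversion explicitly (assuming the raw counter really is in units of 32 MiB, as the attribute name suggests) gives:

# Convert the Host_Writes_32MiB raw values above to total bytes written.
UNIT = 32 * 1024**2   # 32 MiB in bytes

for disk, raw in [("disk 1", 5754781), ("disk 2", 5697133)]:
    total = raw * UNIT
    print(f"{disk}: {total / 1e12:.0f} TB ({total / 2**40:.0f} TiB) written")

# disk 1: 193 TB (176 TiB) written
# disk 2: 191 TB (174 TiB) written

So roughly 190 TB per disk is about right.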

Thanks


Andrei

------------------------------------------------------------------------

    *From: *"Michael Kuriger" <mk7193@xxxxxx>
    *To: *"Mark Nelson" <mark.nelson@xxxxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
    *Sent: *Tuesday, 25 November, 2014 5:12:20 PM
    *Subject: *Re: Tip of the week: don't use Intel 530 SSDs for journals

    My cluster is actually very fast without SSD drives.  Thanks for the
    advice!

    Michael Kuriger
    mk7193@xxxxxx
    818-649-7235

    MikeKuriger (IM)




    On 11/25/14, 7:49 AM, "Mark Nelson" <mark.nelson@xxxxxxxxxxx> wrote:

     >On 11/25/2014 09:41 AM, Erik Logtenberg wrote:
     >> If you are like me, you have the journals for your OSDs with rotating
     >> media stored separately on an SSD. If you are even more like me, you
     >> happen to use Intel 530 SSDs in some of your hosts. If so, please do
     >> check your S.M.A.R.T. statistics regularly, because these SSDs really
     >> can't cope with Ceph.
     >>
     >> Check out the media-wear graphs for the two Intel 530s in my cluster.
     >> As soon as those declining lines get down to 30% or so, they need to
     >> be replaced. That means less than half a year between purchase and
     >> end-of-life :(
     >>
     >> Tip of the week: keep an eye on those statistics, don't let a failing
     >> SSD surprise you.
     >
     >This is really good advice, and it's not just the Intel 530s. Most
     >consumer-grade SSDs have pretty low write endurance. If you are mostly
     >doing reads from your cluster you may be OK, but if you have even
     >moderately high write workloads and you care about avoiding OSD
     >downtime (which in a production cluster is pretty important, though
     >not usually 100% critical), get high write-endurance SSDs.
     >
     >Mark
     >
     >>
     >> Erik.




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



