Re: Tip of the week: don't use Intel 530 SSD's for journals

Mark, if it is not too much trouble for you, could you please check the wear level and the amount of writes done on your Intel 520 SSDs? It would be useful to see whether they are at a similar level of wear/writes to mine.

I am a bit sceptical that my SSDs have done six times the guaranteed amount of writes and still report around 95% health. That looks very odd, or simply incorrect, to me.
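In case it helps with the comparison, here is a rough sketch of how those two attributes could be pulled and the write counter converted into terabytes. It assumes smartmontools is installed and the script runs as root; the /dev/sda path is only a placeholder for whichever device actually holds the journals:

#!/usr/bin/env python3
# Rough sketch: read two Intel SSD SMART attributes via smartctl.
# Attribute 233 (Media_Wearout_Indicator) counts down from 100 as the
# NAND wears; attribute 225 (Host_Writes_32MiB) is the raw number of
# 32 MiB units the host has written.
import subprocess

DEVICE = "/dev/sda"  # placeholder -- point this at the journal SSD

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

attrs = {}
for line in out.splitlines():
    fields = line.split()
    # Attribute rows start with a numeric ID; the raw value comes last.
    if fields and fields[0].isdigit() and fields[-1].isdigit():
        attrs[int(fields[0])] = int(fields[-1])

wearout = attrs.get(233)  # 100 = brand new, lower = more worn
writes = attrs.get(225)   # units of 32 MiB written by the host
if writes is not None:
    tb = writes * 32 * 2**20 / 10**12  # 32 MiB units -> decimal TB
    print(f"{DEVICE}: wearout={wearout}, host writes ~{tb:.0f} TB")

For my disk 1 below, 5,754,781 x 32 MiB works out to roughly 193 TB, which matches the ~190 TB figure from my earlier mail; it is the 95-96% wearout reading alongside that number that puzzles me.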

Cheers

--
Andrei Mikhailovsky
Director
Arhont Information Security

Web: http://www.arhont.com
http://www.wi-foo.com
Tel: +44 (0)870 4431337
Fax: +44 (0)208 429 3111
PGP: Key ID - 0x2B3438DE
PGP: Server - keyserver.pgp.com

DISCLAIMER

The information contained in this email is intended only for the use of the person(s) to whom it is addressed and may be confidential or contain legally privileged information. If you are not the intended recipient you are hereby notified that any perusal, use, distribution, copying or disclosure is strictly prohibited. If you have received this email in error please immediately advise us by return email at andrei@xxxxxxxxxx and delete and purge the email and any attachments without making a copy.



From: "Mark Nelson" <mark.nelson@xxxxxxxxxxx>
To: "Andrei Mikhailovsky" <andrei@xxxxxxxxxx>, "Michael Kuriger" <mk7193@xxxxxx>
Cc: "Mark Nelson" <mark.nelson@xxxxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, 25 November, 2014 9:30:18 PM
Subject: Re: Tip of the week: don't use Intel 530 SSD's for journals

FWIW, I've got Intel 520s in one of our test nodes at Inktank that has
had a fair amount of data thrown at it, and we haven't lost a drive in
two years. Having said that, I'd use higher write-endurance drives in
production, especially given how much cheaper they are getting these days.

Mark

On 11/25/2014 03:25 PM, Andrei Mikhailovsky wrote:
> Thanks for the advice!
>
> I've checked a couple of my Intel 520s, which I use for the OSD
> journals and have been using for almost two years now.
> I do not have a great deal of load, though: only about 60 VMs or so
> with fairly general usage.
>
> Disk 1:
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
> 233 Media_Wearout_Indicator 0x0032   096   096   000    Old_age  Always   -           0
> 225 Host_Writes_32MiB       0x0032   100   100   000    Old_age  Always   -           5754781
>
> Disk 2:
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
> 233 Media_Wearout_Indicator 0x0032   095   095   000    Old_age  Always   -           0
> 225 Host_Writes_32MiB       0x0032   100   100   000    Old_age  Always   -           5697133
>
> So, from what I can see, I still have 95 and 96 percent left on the
> disks, and they have done around 190 terabytes, which seems like a lot
> for a consumer-grade disk. Or maybe I am reading the data wrongly?
>
> Thanks
>
>
> Andrei
>
> ------------------------------------------------------------------------
>
>     *From: *"Michael Kuriger" <mk7193@xxxxxx>
>     *To: *"Mark Nelson" <mark.nelson@xxxxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
>     *Sent: *Tuesday, 25 November, 2014 5:12:20 PM
>     *Subject: *Re: [ceph-users] Tip of the week: don't use Intel 530
>     SSD's for journals
>
>     My cluster is actually very fast without SSD drives.  Thanks for the
>     advice!
>
>     Michael Kuriger
>     mk7193@xxxxxx
>     818-649-7235
>
>     MikeKuriger (IM)
>
>
>
>
>     On 11/25/14, 7:49 AM, "Mark Nelson" <mark.nelson@xxxxxxxxxxx> wrote:
>
>      >On 11/25/2014 09:41 AM, Erik Logtenberg wrote:
>      >> If you are like me, you have the journals for your OSDs with
>      >> rotating media stored separately on an SSD. If you are even more
>      >> like me, you happen to use Intel 530 SSDs in some of your hosts.
>      >> If so, please do check your S.M.A.R.T. statistics regularly,
>      >> because these SSDs really can't cope with Ceph.
>      >>
>      >> Check out the media-wear graphs for the two Intel 530s in my
>      >> cluster. As soon as those declining lines get down to 30% or so,
>      >> they need to be replaced. That means less than half a year between
>      >> purchase and end-of-life :(
>      >>
>      >> Tip of the week: keep an eye on those statistics; don't let a
>      >> failing SSD surprise you.
>      >
>      >This is really good advice, and it's not just the Intel 530s.  Most
>      >consumer-grade SSDs have pretty low write endurance.  If you are
>      >mostly doing reads from your cluster you may be OK, but if you have
>      >even moderately high write workloads and you care about avoiding OSD
>      >downtime (which in a production cluster is pretty important, though
>      >not usually 100% critical), get high write-endurance SSDs.
>      >
>      >Mark
>      >
>      >>
>      >> Erik.
>      >>
>      >>
>      >>
>      >> _______________________________________________
>      >> ceph-users mailing list
>      >> ceph-users@xxxxxxxxxxxxxx
>      >>
>      >>http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>      >>
>      >
>      >_______________________________________________
>      >ceph-users mailing list
>      >ceph-users@xxxxxxxxxxxxxx
>      >http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>     _______________________________________________
>     ceph-users mailing list
>     ceph-users@xxxxxxxxxxxxxx
>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
