Re: List of SSDs

Hello,

> We started having high wait times on the M600s so we got 6 S3610s, 6 M500dcs, and 6 500 GB M600s (they have the SLC to MLC conversion that we thought might work better). 

Is it working better, as you expected?

> We have graphite gathering stats on the admin sockets for Ceph and the standard system stats. 

Very cool!
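
Just to make sure I understand the collection side: are you polling each
OSD's admin socket and pushing the counters into Graphite's plaintext
port? Something like this rough sketch is what I have in mind (the
Graphite host name and metric path are only placeholders, not a guess at
your actual setup):

import json
import socket
import subprocess
import time

def perf_dump(osd_id):
    # "ceph daemon osd.N perf dump" queries the admin socket and
    # returns all perf counters as JSON.
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%d" % osd_id, "perf", "dump"])
    return json.loads(out)

def send_to_graphite(osd_id, counters,
                     host="graphite.example.com", port=2003):
    # Graphite's plaintext protocol: "<path> <value> <timestamp>\n"
    now = int(time.time())
    sock = socket.create_connection((host, port))
    for section, values in counters.items():
        for name, value in values.items():
            # Latency counters are nested dicts (avgcount/sum);
            # this sketch only ships the scalar counters.
            if isinstance(value, (int, float)):
                line = "ceph.osd%d.%s.%s %s %d\n" % (
                    osd_id, section, name, value, now)
                sock.sendall(line.encode())
    sock.close()

Is that roughly it, or are you using an existing collector?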

> We weighted the drives so they had the same byte usage and let them run for a week or so, then made them the same percentage of used space, let them run a couple of weeks, then set them to 80% full and let them run a couple of weeks. 

Almost exactly the same *byte* usage? I'm pretty interested in how you achieved that.
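
By way of a concrete guess (purely hypothetical on my part): did you
read the per-OSD usage from "ceph osd df" and scale each CRUSH weight by
target/used until the byte counts lined up, along these lines?

import json
import subprocess

def equalize(target_bytes):
    # "ceph osd df --format json" reports per-OSD utilization and the
    # current CRUSH weight.
    out = subprocess.check_output(
        ["ceph", "osd", "df", "--format", "json"])
    for node in json.loads(out)["nodes"]:
        used = node["kb_used"] * 1024
        if used == 0:
            continue
        # Scale the weight so every OSD converges on the same byte usage.
        new_weight = node["crush_weight"] * float(target_bytes) / used
        subprocess.check_call(
            ["ceph", "osd", "crush", "reweight",
             node["name"], "%.4f" % new_weight])

Or did you converge on the weights by hand?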

> We compared IOPS and IO time of the drives to get our comparison. 

What is your feeling about the comparison?
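
Also, when you say IOPS and IO time, do you mean something like iostat's
r/s + w/s and %util per device? E.g., a sketch like this, sampling
/sys/block/<dev>/stat (the device name is just an example):

import time

def disk_stats(dev):
    # /sys/block/<dev>/stat: field 0 = reads completed,
    # field 4 = writes completed, field 9 = ms spent doing I/O.
    with open("/sys/block/%s/stat" % dev) as f:
        fields = f.read().split()
    return int(fields[0]) + int(fields[4]), int(fields[9])

def iops_and_util(dev="sdb", interval=10):
    ios0, ms0 = disk_stats(dev)
    time.sleep(interval)
    ios1, ms1 = disk_stats(dev)
    iops = (ios1 - ios0) / float(interval)
    util = (ms1 - ms0) / (interval * 10.0)  # % of wall time busy
    return iops, util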

> This was done on live production clusters and not synthetic benchmarks. 

How large is your production Ceph cluster?

Rgds,
Shinobu

>
> Hello,
>
> On Wed, 24 Feb 2016 22:56:15 -0700 Robert LeBlanc wrote:
>
> > We are moving to the Intel S3610, from our testing it is a good balance
> > between price, performance and longevity. But as with all things, do your
> > testing ahead of time. This will be our third model of SSDs for our
> > cluster. The S3500s didn't have enough life, and performance tapers off
> > as they get full. The Micron M600s looked good in the Sebastian journal
> > tests, but once in use for a while they go downhill pretty badly. We also
> > tested Micron M500dc drives; they were on par with the S3610s but are
> > more expensive and closer to EoL. The S3700s didn't have quite the same
> > performance as the S3610s, but they will last forever and are very stable
> > in terms of performance and have the best power loss protection.
> >
> That's interesting. How did you come to that conclusion, and how did you
> test it?
> Also, which models did you compare?
>
>
> > Short answer is test them for yourself to make sure they will work. You
> > are pretty safe with the Intel S3xxx drives. The Micron M500dc is also
> > pretty safe based on my experience. It has also been mentioned that
> > someone has had good experience with a Samsung DC Pro (it has to have
> > both DC and Pro in the name), but we weren't able to get any quickly
> > enough to test, so I can't vouch for them.
> >
> I have some Samsung DC Pro EVOs in production (non-Ceph, see that
> non-barrier thread).
> They do have issues with LSI occasionally; I haven't gotten around to
> making that FS non-barrier to see if it fixes things.
>
> The EVOs are also similar to the Intel DC S3500s, meaning that they are
> not really suitable for Ceph due to their limited endurance.
>
> Never tested the "real" DC Pro ones, but they are likely to be OK.
>
> Christian
>
> > Sent from a mobile device, please excuse any typos.
> > On Feb 24, 2016 6:37 PM, "Shinobu Kinjo" <skinjo@xxxxxxxxxx> wrote:
> >
> > > Hello,
> > >
> > > There has been a bunch of discussion about using SSD.
> > > Does anyone have any list of SSDs describing which SSD is highly
> > > recommended, which SSD is not.
> > >
> > > Rgds,
> > > Shinobu
> > >
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


