Re: List of SSDs

On Thu, 25 Feb 2016 16:41:59 -0500 (EST) Shinobu Kinjo wrote:

> > Just beware of HBA compatibility, even in passthrough mode some crappy
> > firmwares can try and be smart about what you can do (LSI-Avago, I'm
> > looking your way for crippling TRIM, seriously WTH).
> 
> This is very good to know.
> Can anybody elaborate on this a bit more?
> 
I suggest you read the entire "XFS and nobarriers on Intel SSD" thread on
this very ML; it's the first hit when googling "Ceph LSI SSD".

Note I saw problems with un-patched S3610s (never with 3700s) even when
connected to the onboard Intel SATA controller. 
After updating their firmware this was fixed, but for good measure I also
went no-barrier and upgraded to the latest Debian kernel (4.3), for all
the good that will do.
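For reference, a quick way to check a drive's firmware revision, plus the
no-barrier mount referred to above. This is a sketch: the device name and
OSD mount point are placeholders, and nobarrier is only sane on drives with
trustworthy power-loss protection.

```shell
# Show model and firmware revision of the SSD
# (device name is a placeholder -- adjust to your system)
smartctl -i /dev/sda | grep -Ei 'model|firmware'

# Mount XFS without write barriers; only consider this on drives
# with reliable power-loss protection (e.g. Intel S3610/S3700)
mount -o noatime,nobarrier /dev/sda1 /var/lib/ceph/osd/ceph-0

# Or persistently, in /etc/fstab:
# /dev/sda1  /var/lib/ceph/osd/ceph-0  xfs  noatime,nobarrier  0 0
```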

Christian
> Rgds,
> Shinobu
> 
> ----- Original Message -----
> From: "Jan Schermer" <jan@xxxxxxxxxxx>
> To: "Nick Fisk" <nick@xxxxxxxxxx>
> Cc: "Robert LeBlanc" <robert@xxxxxxxxxxxxx>, "Shinobu Kinjo"
> <skinjo@xxxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
> Sent: Thursday, February 25, 2016 11:10:41 PM
> Subject: Re: List of SSDs
> 
> We are very happy with S3610s in our cluster.
> We had to flash a new firmware because of latency spikes (NCQ-related),
> but had zero problems after that... Just beware of HBA compatibility,
> even in passthrough mode some crappy firmwares can try and be smart
> about what you can do (LSI-Avago, I'm looking your way for crippling
> TRIM, seriously WTH).
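On the HBA point quoted above: a few standard checks to see whether TRIM
actually makes it through the controller to the drive. Device names are
placeholders; adjust to your system.

```shell
# Zero values in DISC-GRAN/DISC-MAX mean the kernel sees no discard
# support on the device, i.e. TRIM is not being passed through
lsblk --discard /dev/sdb

# Ask the drive itself whether it advertises TRIM (SATA drives)
hdparm -I /dev/sdb | grep -i trim

# Cautious live check: fstrim on a mounted filesystem reports how
# much was trimmed, or fails if discard is not supported end-to-end
fstrim -v /mnt/ssd
```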
> 
> Jan
> 
> 
> > On 25 Feb 2016, at 14:48, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > 
> > There are two factors, really:
> >  
> > 1.       Suitability for use in ceph
> > 2.       Number of people using them
> >  
> > For #1, there are a number of people using various different drives,
> > so there are lots of options. The blog article linked is a good place
> > to start.
> > For #2, and I think this is quite important: lots of people use the
> > Intel S3xxx drives. This means any problems you face will likely get
> > a lot of input from other people. You are also less likely to face
> > surprises, as most use cases have already been covered.
> >
> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> > Of Robert LeBlanc
> > Sent: 25 February 2016 05:56
> > To: Shinobu Kinjo <skinjo@xxxxxxxxxx>
> > Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> > Subject: Re: List of SSDs
> >
> > We are moving to the Intel
> > S3610; from our testing it is a good balance between price,
> > performance, and longevity. But as with all things, do your testing
> > ahead of time. This will be our third model of SSD for our cluster.
> > The S3500s didn't have enough life, and performance tapers off as the
> > drive gets full. The Micron M600s looked good in the Sébastien Han
> > journal tests, but went downhill pretty badly once in use for a
> > while. We also tested Micron M500DC drives; they were on par with the
> > S3610s, but are more expensive and closer to EoL. The S3700s didn't
> > have quite the same performance as the S3610s, but they will last
> > forever, are very stable in terms of performance, and have the best
> > power loss protection.
> > 
> > Short answer is: test them for yourself to make sure they will work.
> > You are pretty safe with the Intel S3xxx drives. The Micron M500DC is
> > also pretty safe based on my experience. It has also been mentioned
> > that someone has had good experience with a Samsung DC Pro (it has to
> > have both "DC" and "Pro" in the name), but we weren't able to get any
> > quickly enough to test, so I can't vouch for them.
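One common way to do that testing, in the spirit of the journal benchmarks
mentioned above, is a single-job O_DSYNC write test with fio. A sketch: the
device name is a placeholder, and running this against a raw device
destroys the data on it.

```shell
# Single-job, queue-depth-1, 4k synchronous writes: roughly the I/O
# pattern a Ceph (filestore) journal puts on an SSD.
# WARNING: destroys data on the target device.
fio --name=journal-test \
    --filename=/dev/sdX \
    --direct=1 --sync=1 \
    --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based \
    --group_reporting
```

Drives with proper power-loss protection tend to sustain thousands of such
sync IOPS; consumer drives often collapse to a few hundred or worse.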
> > 
> > Sent from a mobile device, please excuse any typos.
> > 
> > On Feb 24, 2016 6:37 PM, "Shinobu Kinjo" <skinjo@xxxxxxxxxx> wrote:
> >
> > Hello,
> > 
> > There has been a bunch of discussion about using SSDs.
> > Does anyone have a list of SSDs describing which ones are highly
> > recommended and which are not?
> > 
> > Rgds,
> > Shinobu
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx <mailto:ceph-users@xxxxxxxxxxxxxx>
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/



