Re: Fwd: List of SSDs

Comparing these SSDs,

 S3710s
 S3610s
 SM863
 845DC Pro

which one is the most reasonable in terms of performance, cost, or whatever
else matters? The S3710s don't sound reasonable to me.

> And I had no luck at all getting the newer versions into a generic kernel
> or Debian.

So it's not always better to use a newer version. Is my understanding right?
If I don't understand this properly, please point it out to me; I'm quite
serious about that.

Cheers,
Shinobu


On Fri, Mar 4, 2016 at 3:17 PM, Christian Balzer <chibi@xxxxxxx> wrote:
>
> Hello,
>
> On Mon, 29 Feb 2016 15:00:08 -0800 Heath Albritton wrote:
>
>> > Did you just do these tests or did you also do the "suitable for Ceph"
>> > song and dance, as in sync write speed?
>>
>> These were done with libaio, so async.  I can do a sync test if that
>> helps.  My goal for testing wasn't specifically suitability with ceph,
>> but overall suitability in my environment, much of which uses async
>> IO.
>>
> Fair enough.
> Sync tests would be nice, if nothing else to confirm that the Samsung DC
> level SSDs are suitable and how they compare in that respect to the Intels.
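
For reference, the sync test usually meant here is a queue-depth-1 O_SYNC
write run, quite different from the libaio settings above. A rough sketch of
how it could be driven, assuming fio is installed and /dev/sdX is a spare
test device (placeholder name, and the run overwrites whatever is on it):

  # Sketch only: fio assumed installed, /dev/sdX is a placeholder device.
  import subprocess

  SYNC_TEST = [
      "fio", "--name=sync-write-test",
      "--filename=/dev/sdX",   # double-check the device, data is destroyed
      "--direct=1",            # bypass the page cache
      "--sync=1",              # O_SYNC writes, the journal-style workload
      "--rw=write", "--bs=4k",
      "--numjobs=1", "--iodepth=1",
      "--runtime=60", "--time_based",
      "--group_reporting",
  ]
  # The async numbers quoted further down used libaio instead, i.e. roughly
  # --ioengine=libaio --iodepth=32 --numjobs=4 and no --sync.
  subprocess.run(SYNC_TEST, check=True)

Drives without power-loss protection usually fall apart on exactly this
test, which is why it gets treated as the Ceph litmus test.
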
>
>>
>> >> SM863 (default over-provisioning) ~7k IOPS per thread (4 threads, QD32)
>> >> Intel S3710 ~10k IOPS per thread
>> >> 845DC Pro ~12k IOPS per thread
>> >> SM863 (28% over-provisioning) ~18k IOPS per thread
>> >>
>> > Very interesting.
>> > To qualify your values up there, could you provide us with the exact
>> > models, well size of the SSD will do.
>>
>> SM863 was 960GB, I've many of these and the 1.92TB models deployed
>> 845DC Pro, 800GB
>> S3710, 800GB
>>
> Thanks, pretty much an oranges with oranges comparison then. ^o^
>
>> > Also did you test with a S3700 (I find the 3710s to be a slight
>> > regression in some ways)?
>> > And for kicks, did you try over-provisioning with an Intel SSD to see
>> > the effects there?
>>
>> These tests were performed mid-2015.  I requested an S3700, but at
>> that point, I could only get the S3710.  I didn't test the Intel with
>> increased over-provisioning.  I suspect it wouldn't have performed
>> much better as it was already over-provisioned by 28% or thereabouts.
>>
> Yeah, my curiosity was mostly whether there is a similar ratio at work here
> (it might have made more sense for testing purposes to REDUCE the
> overprovisioning of the Intel) and where the point of diminishing returns
> is.
>
>> It's easy to guess at these sorts of things.  The total capacity of
>> the flash is some power of two and the advertised capacity is some
>> power of ten.  Manufacturers use the difference to buy themselves
>> some space for garbage collection.  So, a terabyte worth of flash is
>> 1099511627776 bytes, 800GB is 8e+11 bytes, and the difference of about
>> 299GB is the space they've set aside for GC.
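
If I got the math right, that works out like this:

  raw_flash  = 2**40          # 1 TiB of flash = 1,099,511,627,776 bytes
  advertised = 800 * 10**9    # 800 GB decimal = 800,000,000,000 bytes

  reserve = raw_flash - advertised
  print(f"{reserve / 10**9:.1f} GB held back")       # ~299.5 GB for GC/OP
  print(f"{reserve / raw_flash:.0%} of raw flash")   # ~27%, the "28% or thereabouts"
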
>>
> Ayup, that I was quite aware of.
>
>> Again, if there's some tests you'd like to see done, let me know.
>> It's relatively easy for me to get samples and the tests are a benefit
>> to me as much as any other.
>>
> Well, see above, diminishing returns and all.
>
>>
>> >> I'm seeing the S3710s at ~$1.20/GB and the SM863 around $.63/GB.  As
>> >> such, I'm buying quite a lot of the latter.
>> >
>> > I assume those numbers are before over-provisioning the SM863, still
>> > quite a difference indeed.
>>
>> Yes, that's correct.  Here's some current pricing:  Newegg has the
>> SM863 960GB at $565 or ~$.59/GB raw.  With 28% OP, that yields around
>> 800GB and around $.71/GB.
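
Spelling that per-GB math out, with the price and capacities as quoted:

  price     = 565.0   # USD, SM863 960GB (Newegg price quoted above)
  raw_gb    = 960
  usable_gb = 800     # after pushing over-provisioning to ~28%

  print(f"raw:    ${price / raw_gb:.2f}/GB")     # ~$0.59/GB
  print(f"usable: ${price / usable_gb:.2f}/GB")  # ~$0.71/GB
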
>>
> If I'm reading the (well hidden and only in the PDF) full specs of the
> 960GB SM863 correctly, it has an endurance of about 3 DWPD, so the comparable
> Intel model would be the 3610s.
> At least when it comes to endurance.
> Would be interesting to see those two in comparison. ^.^
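
If that 3 DWPD reading is right, the endurance works out roughly like this
(the 5-year warranty window is my assumption, not something I checked in
the datasheet):

  capacity_tb = 0.96   # SM863 960GB
  dwpd        = 3      # drive writes per day, as read from the spec PDF
  years       = 5      # assumed warranty window

  tbw = capacity_tb * dwpd * 365 * years
  print(f"~{tbw:,.0f} TB written over {years} years")   # ~5,256 TB
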
>
>
>> >> I've not had them deployed
>> >> for very long, so I can't attest to anything beyond my synthetic
>> >> benchmarks.  I'm using the LSI 3008 based HBA as well and I've had to
>> >> use updated firmware and kernel module for it.  I haven't checked the
>> >> kernel that comes with EL7.2, but 7.1 still had problems with the
>> >> included driver.
>> >>
>> > Now THIS is really interesting.
>> > As you may know several people on this ML including me have issues with
>> > LSI 3008s and SSDs, including Samsung ones.
>> >
>> > Can you provide all the details here, as in:
>> > IT or IR mode (IT I presume)
>> > Firmware version
>> > Kernel driver version
>>
>> When initially deployed about a year ago, I had problems with SSDs and
>> spinning disks.  Not sure about any problems specific to Samsung SSDs,
>> but I've been on the upgrade train.
>>
>> I think the stock kernel module is 4.x something or other, and LSI, now
>> Avago, has released P9 through P12 in the past year.  When I first
>> started using them, I was on the P9 firmware and kernel module, which
>> I built from the sources they supply.  At this point most of my infra
>> is on the P10 version.  I've not tested the later versions.
>>
>> Everything is IT mode where possible.
>>
> Yes, at least until kernel 4.1 the module was the 4.0 version.
> And I had no luck at all getting the newer versions into a generic kernel
> or Debian.
> And when I deployed the machines in question, P8 was the latest FW from
> Supermicro.
>
> Kernel 4.4 does have the 9.x module, so I guess that's a way forward at
> least on the kernel side of things (which I think is the more likely
> culprit).
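
A quick way to check which mpt3sas version a box actually ended up with
(the sysfs path is the standard one, but it only exists while the module
is loaded):

  from pathlib import Path

  ver = Path("/sys/module/mpt3sas/version")
  if ver.exists():
      print("mpt3sas driver:", ver.read_text().strip())
  else:
      print("mpt3sas not loaded (or built without a version string)")

The firmware level is separate, of course; that still has to come from
sas3flash or whatever the option ROM reports.
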
>
> Thanks,
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/



-- 
Email:
shinobu@xxxxxxxxx
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


