Fwd: List of SSDs

I've done a bit of testing with the Intel units: S3600, S3700, S3710, and P3700.  I've also tested the Samsung 850 Pro, 845DC Pro, and SM863.

All of my testing was "worst case IOPS" as described here:

http://www.anandtech.com/show/8319/samsung-ssd-845dc-evopro-preview-exploring-worstcase-iops/6

This is all synthetic, with fio.  I'd fill the drive four times, then test IOPS with 4k blocks (QD32, 4 threads).  The results I saw in my tests were not significantly different from the numbers Anandtech has for those same drives.  The Samsung 845DC Pro had sufficient endurance and outperformed the Intel offerings, so that's what I went with when putting together a POC cluster a little over a year ago.
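
For anyone who wants to reproduce this, the runs were roughly as below; /dev/sdX and the runtime are placeholders, not my exact job files:

    # DESTRUCTIVE: writes to the raw device.  /dev/sdX is a placeholder.
    # Fill pass: sequential writes over the whole drive, four passes,
    # to get well past the fresh-out-of-box state.
    fio --name=fill --filename=/dev/sdX --rw=write --bs=128k \
        --ioengine=libaio --iodepth=32 --direct=1 --loops=4

    # Measurement pass: sustained 4k random writes, 4 threads at QD32.
    fio --name=worstcase --filename=/dev/sdX --rw=randwrite --bs=4k \
        --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
        --time_based --runtime=600 --group_reporting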

Much to my chagrin, Samsung EOL'd this drive last summer, right when I began to procure bits for production.  They replaced it with the SM863, whose advertised performance and endurance are *less* than its predecessor's: the SM863 is optimized for mixed workloads, the 845DC Pro for write-heavy workloads.  Digging through some data sheets for this drive, I found a note that said, "endurance and write performance can be increased with over-provisioning."

After some discussions with product management at Samsung, I discovered that the 845DC Pro is 28% over-provisioned and that the SM863 is 12% over-provisioned by default (the 850 Pro and most consumer drives are around 6%).  As such, I tested with and without extra over-provisioning, which I adjusted with hdparm since the Samsung tools aren't really up to date for their new drives (a sketch of the hdparm approach follows the results below).  The result was nearly a 3x improvement in worst-case IOPS.  My tests came out something like this:

SM863 (default 12% over-provisioning): ~7k IOPS per thread (4 threads, QD32)
Intel S3710: ~10k IOPS per thread
845DC Pro: ~12k IOPS per thread
SM863 (28% over-provisioning): ~18k IOPS per thread
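
The hdparm trick, roughly (device and sector count are placeholders; -Np is permanent, so double-check the math before running it):

    # Secure-erase or blkdiscard the drive first so the controller
    # actually treats the hidden LBAs as free space.
    hdparm -N /dev/sdX            # show current/native max sector count

    # Clamp visible capacity to ~72% of native max for ~28% OP.
    # Example for a drive reporting 1,953,525,168 native sectors:
    hdparm -Np1406538120 --yes-i-know-what-i-am-doing /dev/sdX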

I also tested the 850 Pro with similar over-provisioning and saw significantly improved performance.  If anyone cares enough, I can dig up some of these charts to show the results.  The SM863 also gave the most consistent performance; the S3710 had easily identifiable garbage-collection events.

I'm seeing the S3710s at ~$1.20/GB and the SM863 around $0.63/GB, so I'm buying quite a lot of the latter.  I haven't had them deployed for very long, so I can't attest to anything beyond my synthetic benchmarks.  I'm using an LSI SAS3008-based HBA as well, and I've had to use updated firmware and an updated kernel module (mpt3sas) for it.  I haven't checked the kernel that ships with EL7.2, but 7.1 still had problems with the included driver.
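
For what it's worth, the checks I use to confirm which firmware and driver are actually in play are nothing exotic:

    lspci -nn | grep -i lsi            # confirm the SAS3008 HBA is visible
    modinfo mpt3sas | grep -i version  # version of the mpt3sas module on disk
    dmesg | grep -i mpt3sas            # firmware revision reported at boot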

As an aside, I'm using the Intel P3700 as a journal device for spinning nodes, and they're working very well.  Latency is consistently 1/10th that of any SATA SSD I've tested.  I'm keen to test some of the larger 2.5" NVMe SSDs coming to market for use as OSDs; those are hitting around $1.20/GB.  I'd also like to try using an M.2 SSD for journals.  Kingston announced something called the E1000, a host card for these that adds power-loss protection.
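
For reference, the journal layout is the standard filestore arrangement; the device names below are placeholders, and ceph-disk carves the journal partition out of the NVMe on its own:

    # /dev/sdb = spinning OSD data disk, /dev/nvme0n1 = P3700 journal device
    ceph-disk prepare /dev/sdb /dev/nvme0n1
    ceph-disk activate /dev/sdb1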


-H


On Thu, Feb 25, 2016 at 8:20 PM, Christian Balzer <chibi@xxxxxxx> wrote:
I have some Samsung DC EVOs in production (non-Ceph, see that
non-barrier thread).
They do have issues with LSI occasionally; I haven't gotten around to
making that FS non-barrier to see if it fixes things.

The EVOs are also similar to the Intel DC S3500s, meaning that they are
not really suitable for Ceph due to their limited endurance.

Never tested the "real" DC Pro ones, but they are likely to be OK.

