Re: Intel S3710 400GB and Samsung PM863 480GB fio results

Hello,


On 12/23/2015 04:38 PM, Lionel Bouton wrote:
> On 23/12/2015 16:18, Mart van Santen wrote:
>> Hi all,
>>
>>
>> On 12/22/2015 01:55 PM, Wido den Hollander wrote:
>>> On 22-12-15 13:43, Andrei Mikhailovsky wrote:
>>>> Hello guys,
>>>>
>>>> I was wondering if anyone has done testing on the Samsung PM863 120GB version to see how it performs? IMHO the 480GB version seems like a waste for the journal, as you only need a small disk to fit 3-4 OSD journals. Unless you get far greater durability.
>>>>
>>> In that case I would look at the SM863 from Samsung. They are sold as
>>> write-intensive SSDs.
>>>
>>> Wido
>>>
>> Today I received a small batch of SM863 (1.9TB) disks, so maybe these
>> test results are helpful for making a decision.
>> This is on an IBM X3550M4 with a MegaRAID SAS card (so not in JBOD
>> mode). Unfortunately I have no suitable JBOD card available in my test
>> server, so I'm stuck with the "RAID" layer in the HBA.
>>
>>
>>
>> disabled drive cache, disabled controller cache
>> ---------------------------------------------------------------
>>
>>
>> 1 job
>> -----------
>> Run status group 0 (all jobs):
>>   WRITE: io=906536KB, aggrb=15108KB/s, minb=15108KB/s, maxb=15108KB/s,
>> mint=60001msec, maxt=60001msec
>>
>> Disk stats (read/write):
>>   sdd: ios=91/452978, merge=0/0, ticks=12/39032, in_queue=39016, util=65.04%
> Either the MegaRAID SAS card is the bottleneck, or the SM863 1.9TB is 8x
> slower than the PM863 480GB on this particular test, which would be a bit
> surprising: it would make the SM863 one of the slowest (or even the
> slowest) DC SSDs usable as Ceph journals.
> Do you have any other SSD (if possible one of the models listed on
> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
> or a similar one which gives more than 15MB/s with one job) connected to
> the same card model that you could test for comparison?

The performance is strange. I've done some more tests, and it fluctuates
a bit (which is odd, as the system is nearly idle), but I currently get
between 15MB/s and 30MB/s (1 job, 4k). Notably, I get about the same
results (with the same fluctuation) from an S3700 (100GB).
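
For reference, the numbers above are from the usual single-job 4k sync-write
journal test; a command along the lines of the one below (parameters assumed
from Sebastien's post, device name only an example) should reproduce it:

  fio --filename=/dev/sdd --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test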

I've plugged the SM863 into a different system with another SAS card,
which gave a better result:

(Symbios Logic SAS2308, but the enclosure is a 3Gbps system)

Run status group 0 (all jobs):
  WRITE: io=2713.7MB, aggrb=46311KB/s, minb=46311KB/s, maxb=46311KB/s,
mint=60001msec, maxt=60001msec

Disk stats (read/write):
  sdb: ios=9/694054, merge=0/0, ticks=0/49284, in_queue=49252, util=82.09%

~ 46 MB/s
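
(As a sanity check, assuming the same 4k block size: 694054 write IOs * 4KB
/ 60s ≈ 46,270KB/s, which roughly matches the reported aggrb=46311KB/s.)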


So maybe you are right and the HBA is the bottleneck (LSI Logic /
Symbios Logic MegaRAID SAS 2108). In any case, I do not get close to the
numbers for the PM863 quoted by Sebastien, but his site does not state
what kind of HBA he is using.


Regards,

Mart




>
> Lionel

-- 
Mart van Santen
Greenhost
E: mart@xxxxxxxxxxxx
T: +31 20 4890444
W: https://greenhost.nl

A PGP signature can be attached to this e-mail;
you need PGP software to verify it.
My public key is available on keyserver(s),
see: http://tinyurl.com/openpgp-manual

PGP Fingerprint: CA85 EB11 2B70 042D AF66  B29A 6437 01A1 10A3 D3A5



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
