Re: SSD test results with Plextor M6 Pro, HyperX Fury, Kingston V300, ADATA SP90

Hi Jan,

I am building two new clusters for testing. I have been reading your
messages on the mailing list for a while now and want to thank you for
your support.

I can redo all the numbers, but is your question to run all the tests
again with [hdparm -W 1 /dev/sdc]? Please tell me what else you would
like to see tested, and which commands to use.

My experience was that enabling the disk write cache causes about a 45%
performance drop: iops=25690 with the cache enabled vs iops=46185 with
it disabled.
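
For context, these iops figures come from 4k sync-write tests roughly
like the following fio job (a sketch: the device name and exact
parameters here are illustrative, the full commands are in the
attachment of my earlier mail):

    # single-job 4k O_DSYNC writes at queue depth 1, the access pattern
    # of a Ceph journal -- warning: this writes directly to the device
    fio --filename=/dev/sdc --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test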

I am going to test DT01ACA300 vs WD1003FBYZ disks with SV300S37A SSDs
in my other two three-node Ceph clusters.

What is your advice on making the hdparm and possible I/O scheduler
(noop) changes persistent: a command in rc.local, or special udev
rules? Do you have examples?
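
The kind of udev rule I have in mind is sketched below (untested; the
rules file name is arbitrary and the device matches must be adapted to
the actual journal SSDs):

    # /etc/udev/rules.d/99-ssd-tuning.rules
    # use the noop scheduler on all non-rotational devices
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
    # apply the write-cache setting on the journal SSD (-W 0 was the
    # faster setting in my tests; adjust the match and value as needed)
    ACTION=="add", KERNEL=="sdc", RUN+="/sbin/hdparm -W 0 /dev/$kernel"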

Kind regards,

Jelle de Jong


On 23/06/15 12:41, Jan Schermer wrote:
> Those are interesting numbers - can you rerun the test with write cache enabled this time? I wonder how much your drop will be…
> 
> thanks
> 
> Jan
> 
>> On 18 Jun 2015, at 17:48, Jelle de Jong <jelledejong@xxxxxxxxxxxxx> wrote:
>>
>> Hello everybody,
>>
>> I thought I would share the benchmarks from these four SSDs I tested
>> (see attachment).
>>
>> I still have some questions:
>>
>> #1     *    Data Set Management TRIM supported (limit 1 block)
>>    vs
>>       *    Data Set Management TRIM supported (limit 8 blocks)
>> How does this difference affect Ceph, and how can I test that TRIM is
>> actually working and not corrupting data?
>>
>> #2 Are there other things I should test to compare SSDs for Ceph journals?
>>
>> #3 Are the power-loss protection mechanisms on SSDs relevant in Ceph
>> when it is configured in a way that a full node can die completely,
>> and a power loss of all nodes at the same time should not be possible
>> (or has an extremely low probability)?
>>
>> #4 How can I benchmark the OSD (disk + SSD journal) combination so I
>> can compare them?
>>
>> I have some other benchmark questions, but I will send a separate
>> mail for them.
>>
>> Kind regards,
>>
>> Jelle de Jong
>> <setup-ceph01-ceph-ssd-benchmark.txt>
> 
