Re: dd testing from within the VM

Hi Ken,

Wow, that is quite bad. With numbers like that you cannot really use the
cluster as it is.

What does your ceph.conf look like?

And what does ceph -s show?
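
To start with, something along these lines should collect the relevant
details (the pool name "rbd" is only an example, use the pool your VM
images actually live on):

   ceph -s
   ceph osd tree
   ceph osd pool get rbd size
   cat /etc/ceph/ceph.conf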


-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at Amtsgericht Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


On 19.05.2016 at 12:56, Ken Peng wrote:
> Oliver,
> 
> Thanks for the info.
> We then ran sysbench for random IO testing, and the result is even worse
> (757 KB/s).
> Each object has 3 replicas.
> Both networks are 10 Gbps, so I don't think the network is the issue.
> Maybe the lack of SSD caching and an incorrect cluster configuration are
> the reason.
> 
> ----
> 
> Extra file open flags: 0
> 128 files, 360Mb each
> 45Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 0
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random r/w test
> Threads started!
> 
> Time limit exceeded, exiting...
> Done.
> 
> Operations performed:  8520 Read, 5680 Write, 18056 Other = 32256 Total
> Read 133.12Mb  Written 88.75Mb  Total transferred 221.88Mb  (757.33Kb/sec)
>    47.33 Requests/sec executed
> 
> Test execution summary:
>     total time:                          300.0012s
>     total number of events:              14200
>     total time taken by event execution: 21.6865
>     per-request statistics:
>          min:                                  0.02ms
>          avg:                                  1.53ms
>          max:                               1325.73ms
>          approx.  95 percentile:               1.92ms
> 
> Threads fairness:
>     events (avg/stddev):           14200.0000/0.00
>     execution time (avg/stddev):   21.6865/0.00
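> 
> For reference, an invocation roughly like the one below matches the
> parameters shown above (reconstructed from the summary, so the exact
> command line may have differed):
> 
>   sysbench --test=fileio --file-num=128 --file-total-size=45G prepare
>   sysbench --test=fileio --file-num=128 --file-total-size=45G \
>       --file-test-mode=rndrw --file-block-size=16K \
>       --max-requests=0 --max-time=300 run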
> 
> 
> 
> 
> On 2016/5/19 (Thursday) 18:24, Oliver Dzombic wrote:
>> Hi Ken,
>>
>> dd is ok, but keep in mind that dd writes a single sequential stream.
>>
>> So if your later production workload consists of random writes, this
>> test basically only measures the maximum sequential write performance
>> of an idle cluster.
>>
>> And 250 MB/s across 200 HDDs is really bad sequential write
>> performance.
>>
>> The sequential write speed of a single 7200 RPM SATA HDD should be
>> around 70-100 MB/s, maybe more.
>>
>> So 200 of them, idle, writing a sequential stream and ending up at
>> 250 MB/s does not look good to me.
>>
>> So either your network is not good, or your settings are not good, or
>> your replica count is too high, or something like that.
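>>
>> A quick iperf run between the client and one of the OSD nodes would at
>> least rule the network in or out, for example (the hostname below is
>> only a placeholder):
>>
>>   iperf -s               # on the OSD node
>>   iperf -c osd-node-1    # on the client / VM host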
>>
>> Put another way, that is 200 HDDs each delivering about 1.2 MB/s of
>> write throughput (250 MB/s spread across 200 HDDs).
>>
>> I assume your 4 GB test file will not be spread across all 200 HDDs.
>> But still, the result does not look like good performance.
>>
>> fio is a nice tool for testing with different settings.
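>>
>> For example, a random write run from inside the VM along these lines
>> (the job parameters are only examples, not a recommendation) gives an
>> IOPS figure to compare against:
>>
>>   fio --name=randwrite --rw=randwrite --bs=4k --size=4G \
>>       --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
>>       --runtime=300 --time_based --group_reporting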
>>
>> ---
>>
>> The effect of conv=fdatasync will only be as big as the amount of RAM
>> in your test client.
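>>
>> For a sequential test that bypasses the page cache entirely you can use
>> direct I/O instead, for example (file path and size are only examples):
>>
>>   dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 oflag=direct
>>
>> versus flushing once at the end:
>>
>>   dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 conv=fdatasync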
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



