Re: performance in a small cluster

Hello Robert,

The following tool can probably give you deeper insight into what's happening on your OSDs:

https://github.com/scoopex/ceph/blob/master/src/tools/histogram_dump.py
https://github.com/ceph/ceph/pull/28244
https://user-images.githubusercontent.com/288876/58368661-410afa00-7ef0-11e9-9aca-b09d974024a7.png
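
histogram_dump.py visualizes the 2D operation histograms (e.g. latency by
request size) that every OSD exposes on its admin socket. If you only want
the raw data for your own analysis, a minimal sketch along these lines
should work; run it on the OSD's host, and note that the exact counter
names in the output vary between Ceph releases:

#!/usr/bin/env python3
# Minimal sketch: pull the raw perf histograms from an OSD's admin
# socket, the same data source that histogram_dump.py visualizes.
import json
import subprocess
import sys

def histogram_dump(osd_id):
    # "ceph daemon osd.N perf histogram dump" prints JSON on stdout.
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}",
         "perf", "histogram", "dump"])
    return json.loads(out)

if __name__ == "__main__":
    data = histogram_dump(sys.argv[1] if len(sys.argv) > 1 else "0")
    # List the available histograms so you can pick one to analyze.
    for section, counters in data.items():
        for name in counters:
            print(f"{section}/{name}")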

Monitoring virtual machine/client behavior in a comparable way would also be useful.

@All: Do you know suitable tools?

  * kernel rbd
  * rbd-nbd
  * linux native (i.e. if you want to analyze from inside a KVM or Xen VM)

(the output of "iostat -N -d -x -t -m 10" does not seem to be detailed enough for this kind of analysis; see the sketch below for one way to get at the raw per-device counters)
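
Until someone names something better: inside a VM, or on a client with a
kernel rbd (rbd0) or rbd-nbd (nbd0) device, the counters in /proc/diskstats
are already enough to compute average request latencies over an interval
yourself. A minimal sketch, with device name and interval as placeholder
defaults:

#!/usr/bin/env python3
# Minimal sketch: derive average read/write request latency for one
# block device from /proc/diskstats deltas. This is the same source
# iostat's "await" columns come from, but the raw deltas are easier
# to post-process or correlate with OSD-side data.
import sys
import time

def read_stats(dev):
    # /proc/diskstats lines are "major minor name counters..."; of the
    # counters, index 0/3 are reads completed / ms spent reading, and
    # index 4/7 the same pair for writes.
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == dev:
                c = [int(x) for x in parts[3:]]
                return c[0], c[3], c[4], c[7]
    raise SystemExit(f"device {dev} not found in /proc/diskstats")

def main(dev="rbd0", interval=10):
    prev = read_stats(dev)
    while True:
        time.sleep(interval)
        cur = read_stats(dev)
        d_rd, d_rd_ms, d_wr, d_wr_ms = (c - p for c, p in zip(cur, prev))
        rd_lat = d_rd_ms / d_rd if d_rd else 0.0
        wr_lat = d_wr_ms / d_wr if d_wr else 0.0
        print(f"{dev}: {d_rd / interval:.0f} r/s avg {rd_lat:.2f} ms | "
              f"{d_wr / interval:.0f} w/s avg {wr_lat:.2f} ms")
        prev = cur

if __name__ == "__main__":
    main(*sys.argv[1:2])

This still only gives averages rather than a distribution, but it is a
starting point until a proper client-side histogram tool shows up.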

Regards
Marc

On 24.05.19 at 13:22, Robert Sander wrote:
> Hi,
>
> we have a small cluster at a customer's site with three nodes and 4 SSD-OSDs each.
> Connected via 10G, the system is expected to perform well.
>
> rados bench shows ~450 MB/s write and ~950 MB/s read with 4 MB objects, but only 20 MB/s write and 95 MB/s read with 4 KB objects.
>
> This is a little disappointing, as the same 4K performance is also seen in KVM VMs using RBD.
>
> Is there anything we can do to improve performance with small objects / block sizes?
>
> Jumbo frames have already been enabled.
>
> 4MB objects write:
>
> Total time run:         30.218930
> Total writes made:      3391
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     448.858
> Stddev Bandwidth:       63.5044
> Max bandwidth (MB/sec): 552
> Min bandwidth (MB/sec): 320
> Average IOPS:           112
> Stddev IOPS:            15
> Max IOPS:               138
> Min IOPS:               80
> Average Latency(s):     0.142475
> Stddev Latency(s):      0.0990132
> Max latency(s):         0.814715
> Min latency(s):         0.0308732
>
> 4MB objects rand read:
>
> Total time run:       30.169312
> Total reads made:     7223
> Read size:            4194304
> Object size:          4194304
> Bandwidth (MB/sec):   957.662
> Average IOPS:         239
> Stddev IOPS:          23
> Max IOPS:             272
> Min IOPS:             175
> Average Latency(s):   0.0653696
> Max latency(s):       0.517275
> Min latency(s):       0.00201978
>
> 4K objects write:
>
> Total time run:         30.002628
> Total writes made:      165404
> Write size:             4096
> Object size:            4096
> Bandwidth (MB/sec):     21.5351
> Stddev Bandwidth:       2.0575
> Max bandwidth (MB/sec): 22.4727
> Min bandwidth (MB/sec): 11.0508
> Average IOPS:           5512
> Stddev IOPS:            526
> Max IOPS:               5753
> Min IOPS:               2829
> Average Latency(s):     0.00290095
> Stddev Latency(s):      0.0015036
> Max latency(s):         0.0778454
> Min latency(s):         0.00174262
>
> 4K objects read:
>
> Total time run:       30.000538
> Total reads made:     1064610
> Read size:            4096
> Object size:          4096
> Bandwidth (MB/sec):   138.619
> Average IOPS:         35486
> Stddev IOPS:          3776
> Max IOPS:             42208
> Min IOPS:             26264
> Average Latency(s):   0.000443905
> Max latency(s):       0.0123462
> Min latency(s):       0.000123081
>
>
> Regards



