Are those benchmarks okay?

Hello List,

I have 3 nodes with size = 2 and min_size = 2 (I will change it to
size = 3), connected via 10 GBit; iperf shows about 8.8 GBit/sec.
I just want to make sure everything is okay up to this point (since I
have an RBD benchmark problem).
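
For reference, this is roughly how I measured the network (a sketch of
what I ran; I tested iperf between two of the nodes, and the IP below is
just my ceph01 on the public network):

  # on ceph01
  iperf -s
  # on ceph02
  iperf -c 10.10.10.101 -t 30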

The nodes have HDDs (spinning rust); here is the output of "ceph tell
osd.* bench -f plain":

osd.0: bench: wrote 1GiB in blocks of 4MiB in 7.23372 sec at  142MiB/sec 35 IOPS
osd.1: bench: wrote 1GiB in blocks of 4MiB in 5.59544 sec at 183MiB/sec 45 IOPS
osd.2: bench: wrote 1GiB in blocks of 4MiB in 4.07307 sec at  251MiB/sec 62 IOPS
osd.3: bench: wrote 1GiB in blocks of 4MiB in 4.5702 sec at   224MiB/sec 56 IOPS
osd.4: bench: wrote 1GiB in blocks of 4MiB in 4.51627 sec at  227MiB/sec 56 IOPS
osd.5: bench: wrote 1GiB in blocks of 4MiB in 5.13515 sec at  199MiB/sec 49 IOPS
osd.6: bench: wrote 1GiB in blocks of 4MiB in 7.00363 sec at  146MiB/sec 36 IOPS
osd.7: bench: wrote 1GiB in blocks of 4MiB in 4.3339 sec at   236MiB/sec 59 IOPS
osd.8: bench: wrote 1GiB in blocks of 4MiB in 1.52417 sec at  672MiB/sec 167 IOPS
osd.9: bench: wrote 1GiB in blocks of 4MiB in 4.54169 sec at  225MiB/sec 56 IOPS
osd.10: bench: wrote 1GiB in blocks of 4MiB in 4.33818 sec at 236MiB/sec 59 IOPS
osd.11: bench: wrote 1GiB in blocks of 4MiB in 2.09072 sec at 490MiB/sec 122 IOPS
osd.12: bench: wrote 1GiB in blocks of 4MiB in 5.68917 sec at 180MiB/sec 44 IOPS
osd.13: bench: wrote 1GiB in blocks of 4MiB in 4.07676 sec at 251MiB/sec 62 IOPS
osd.14: bench: wrote 1GiB in blocks of 4MiB in 2.41606 sec at 424MiB/sec 105 IOPS
osd.15: bench: wrote 1GiB in blocks of 4MiB in 5.25646 sec at 195MiB/sec 48 IOPS
osd.16: bench: wrote 1GiB in blocks of 4MiB in 7.46026 sec at 137MiB/sec 34 IOPS
osd.17: bench: wrote 1GiB in blocks of 4MiB in 6.8557 sec at  149MiB/sec 37 IOPS
osd.18: bench: wrote 1GiB in blocks of 4MiB in 2.46471 sec at 415MiB/sec 103 IOPS
osd.19: bench: wrote 1GiB in blocks of 4MiB in 4.57149 sec at 224MiB/sec 55 IOPS
osd.20: bench: wrote 1GiB in blocks of 4MiB in 8.0242 sec at  128MiB/sec 31 IOPS
osd.21: bench: wrote 1GiB in blocks of 4MiB in 6.48359 sec at 158MiB/sec 39 IOPS
osd.22: bench: wrote 1GiB in blocks of 4MiB in 1.87764 sec at 545MiB/sec 136 IOPS
osd.23: bench: wrote 1GiB in blocks of 4MiB in 3.97407 sec at 258MiB/sec 64 IOPS
osd.24: bench: wrote 1GiB in blocks of 4MiB in 4.50687 sec at 227MiB/sec 56 IOPS
osd.25: bench: wrote 1GiB in blocks of 4MiB in 5.08253 sec at 201MiB/sec 50 IOPS
osd.27: bench: wrote 1GiB in blocks of 4MiB in 8.01664 sec at 128MiB/sec 31 IOPS
osd.32: bench: wrote 1GiB in blocks of 4MiB in 4.32641 sec at 237MiB/sec 59 IOPS

It looks okay to me for HDDs.
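
As far as I understand, "ceph tell osd.N bench" writes locally on each
OSD without replication or network traffic, so it should mostly reflect
raw disk speed. If it helps, I could also test a single disk directly
with fio; a rough sketch (/dev/sdX is only a placeholder for an unused
disk, and this write test destroys the data on it):

  fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=4M \
      --size=1G --direct=1 --ioengine=libaio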

Here is "rados bench -p rbdbench1  300 write --no-cleanup":
...
Total time run:         300.395750
Total writes made:      56850
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     757.001
Stddev Bandwidth:       106.837
Max bandwidth (MB/sec): 940
Min bandwidth (MB/sec): 112
Average IOPS:           189
Stddev IOPS:            26
Max IOPS:               235
Min IOPS:               28
Average Latency(s):     0.0845247
Stddev Latency(s):      0.10519
Max latency(s):         2.34853
Min latency(s):         0.0149522

Is this okay, too?
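
My own back-of-the-envelope estimate, assuming roughly 200 MiB/s per OSD
on average and a single rados bench client behind one 10 GBit link:

  28 OSDs x ~200 MiB/s         ~ 5.5 GiB/s raw disk write
  / 2 (size = 2, two copies)   ~ 2.7 GiB/s of client writes the disks could absorb
  one 10 GBit client link      ~ 1.1 GiB/s ceiling for a single client

So 757 MB/s average (940 MB/s peak) from one client seems plausible to
me, but please correct me if that reasoning is wrong.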


Here is some more info:
- - -

root@ceph03:~# ceph osd tree
ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       60.17731 root default
-2       17.58112     host ceph01
 0   hdd  1.71089         osd.0       up  1.00000 1.00000
 8   hdd  2.67029         osd.8       up  1.00000 1.00000
11   hdd  1.59999         osd.11      up  1.00000 1.00000
12   hdd  1.59999         osd.12      up  1.00000 1.00000
14   hdd  2.79999         osd.14      up  1.00000 1.00000
18   hdd  1.59999         osd.18      up  1.00000 1.00000
22   hdd  2.79999         osd.22      up  1.00000 1.00000
23   hdd  2.79999         osd.23      up  1.00000 1.00000
-3       20.38164     host ceph02
 2   hdd  2.67029         osd.2       up  1.00000 1.00000
 3   hdd  2.00000         osd.3       up  1.00000 1.00000
 7   hdd  2.67029         osd.7       up  1.00000 1.00000
 9   hdd  2.67029         osd.9       up  1.00000 1.00000
13   hdd  2.00000         osd.13      up  1.00000 1.00000
16   hdd  1.59999         osd.16      up  1.00000 1.00000
19   hdd  2.38409         osd.19      up  1.00000 1.00000
24   hdd  2.67020         osd.24      up  1.00000 1.00000
25   hdd  1.71649         osd.25      up  1.00000 1.00000
-4       22.21455     host ceph03
 1   hdd  1.71660         osd.1       up  1.00000 1.00000
 4   hdd  2.67020         osd.4       up  1.00000 1.00000
 5   hdd  1.71660         osd.5       up  1.00000 1.00000
 6   hdd  1.71660         osd.6       up  1.00000 1.00000
10   hdd  2.67029         osd.10      up  1.00000 1.00000
15   hdd  2.00000         osd.15      up  1.00000 1.00000
17   hdd  1.62109         osd.17      up  1.00000 1.00000
20   hdd  1.71649         osd.20      up  1.00000 1.00000
21   hdd  2.00000         osd.21      up  1.00000 1.00000
27   hdd  1.71649         osd.27      up  1.00000 1.00000
32   hdd  2.67020         osd.32      up  1.00000 1.00000


root@ceph03:~# cat /etc/ceph/ceph.conf
[global]
fsid = 5436dd5d-83d4-4dc8-a93b-60ab5db145df
mon_initial_members = ceph01,ceph02,ceph03
mon_host = 10.10.10.101,10.10.10.102,10.10.10.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon allow pool delete = true
public network = 10.10.10.96/28
cluster network = 10.10.10.0/28

[osd]
osd scrub begin hour = 10
osd scrub end hour = 17
osd scrub load threshold = 3
osd max scrubs = 1
osd max backfill = 2
osd recovery max active = 3

[mon.ceph01]
mon allow pool delete = true
mon addr = 10.10.10.101:6789

[mon.ceph02]
mon allow pool delete = true
mon addr = 10.10.10.102:6789


[mon.ceph03]
mon allow pool delete = true
mon addr = 10.10.10.103:6789


Thanks,
Mario
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


