Ceph Poor RBD Performance

Hi. I'm having a performance issue with Ceph RBD. The performance is not what I expected given my node metrics; the metrics are below.

Environment:
- CNI: Calico
- Rook-Ceph 1.6, deployed from the stock YAML files; Rook is not running on the host network
- OS: CentOS 8 Stream
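
For reference, the CephCluster network settings can be double-checked from the CR (assuming the default CR name "rook-ceph" in the "rook-ceph" namespace):

[root@node1 ~]# kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.spec.network}'

With the stock YAML this should come back empty/default, i.e. the Ceph daemons use the Calico pod network rather than hostNetwork.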

[root@node4 ~]# uname -a
Linux node4 4.18.0-240.el8.x86_64 #1 SMP Fri Sep 25 19:48:47 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux


[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph health
HEALTH_WARN mons are allowing insecure global_id reclaim; clock skew detected on mon.b, mon.c; 1 pool(s) do not have an application enabled
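
I have not fixed these warnings yet. If they matter here, the usual remedies would be along these lines (the pool in the last warning is probably the "scbench" pool I created for benchmarking, but that is a guess):

# clock skew: verify NTP/chrony is in sync on every mon host
[root@node4 ~]# chronyc tracking

# insecure global_id reclaim: can be turned off once all clients are updated
[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph config set mon auth_allow_insecure_global_id_reclaim false

# tag the benchmark pool with an application
[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph osd pool application enable scbench rbd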



[root@node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:32:49Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}


[root@node4 ~]# iperf3 -c 172.16.11.181 -p 3000
Connecting to host 172.16.11.181, port 3000
[  5] local 172.16.11.180 port 46390 connected to 172.16.11.181 port 3000
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   201 MBytes  1.69 Gbits/sec  488   48.1 KBytes
[  5]   1.00-2.00   sec   160 MBytes  1.34 Gbits/sec  397   39.6 KBytes
[  5]   2.00-3.00   sec   201 MBytes  1.68 Gbits/sec  513   69.3 KBytes
[  5]   3.00-4.00   sec   200 MBytes  1.68 Gbits/sec  374   38.2 KBytes
[  5]   4.00-5.00   sec   199 MBytes  1.67 Gbits/sec  402   48.1 KBytes
[  5]   5.00-6.00   sec   201 MBytes  1.69 Gbits/sec  559   48.1 KBytes
[  5]   6.00-7.00   sec   204 MBytes  1.71 Gbits/sec  470   45.2 KBytes
[  5]   7.00-8.00   sec   199 MBytes  1.67 Gbits/sec  575   46.7 KBytes
[  5]   8.00-9.00   sec   200 MBytes  1.68 Gbits/sec  404   49.5 KBytes
[  5]   9.00-10.00  sec   200 MBytes  1.68 Gbits/sec  391   49.5 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.92 GBytes  1.65 Gbits/sec  4573             sender
[  5]   0.00-10.04  sec  1.92 GBytes  1.64 Gbits/sec                  receiver
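
That test is node-to-node, and the retransmit counts already look high. To see how much overhead the Calico overlay adds, I can repeat it pod-to-pod; a rough sketch (the image name is just an example, and <server-pod-ip> has to be filled in from the first pod):

[root@node1 ~]# kubectl run iperf-server --image=networkstatic/iperf3 --command -- iperf3 -s
[root@node1 ~]# kubectl get pod iperf-server -o wide        # note the pod IP
[root@node1 ~]# kubectl run iperf-client --rm -it --restart=Never --image=networkstatic/iperf3 --command -- iperf3 -c <server-pod-ip>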


In a pod with the RBD volume mounted:
cassandra@k8ssandra-dc1-default-sts-0:/var/lib/cassandra$ dd if=/dev/zero of=/var/lib/cassandra/test.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 28.051 s, 38.3 MB/s
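
dd with a single 1 GiB dsync write is a fairly blunt test; if it is more useful I can also run fio against the same RBD-backed path, something like this (fio would need to be installed in the pod first):

cassandra@k8ssandra-dc1-default-sts-0:/var/lib/cassandra$ fio --name=rbdtest \
    --filename=/var/lib/cassandra/fio.test --size=1G --rw=write --bs=4M \
    --ioengine=libaio --direct=1 --iodepth=16 --numjobs=1 --group_reporting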


[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph status
  cluster:
    id:     d6584e08-f8f5-43e4-a258-8d652cc28e0a
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            clock skew detected on mon.b, mon.c
            1 pool(s) do not have an application enabled
  services:
    mon: 3 daemons, quorum a,b,c (age 32h)
    mgr: a(active, since 3h)
    osd: 24 osds: 24 up (since 32h), 24 in (since 3d)
  data:
    pools:   4 pools, 97 pgs
    objects: 181.38k objects, 686 GiB
    usage:   1.3 TiB used, 173 TiB / 175 TiB avail
    pgs:     97 active+clean
  io:
    client:   308 KiB/s wr, 0 op/s rd, 4 op/s wr
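
One thing I noticed in the status above: 97 PGs spread over 24 OSDs is only about 4 PGs per OSD, so a single pool may not be spreading writes across many disks. The per-pool PG counts can be checked with:

[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph osd pool ls detail
[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph osd pool autoscale-status
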
[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# rados bench -p scbench 15 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 15 seconds or 0 objects
Object prefix: benchmark_data_rook-ceph-tools-fc5f9586c-nb2_37098
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        19         3   11.9985        12    0.480534    0.377567
    2      16        22         6   11.9986        12     1.77015     1.04767
    3      16        27        11   14.6652        20     2.48028     1.26595
    4      16        29        13   12.9988         8    0.766731     1.18763
    5      16        34        18   14.3986        20     4.95184     1.44111
    6      16        41        25   16.6651        28    0.742371     1.91373
    7      16        45        29   16.5699        16     1.00815     2.14879
    8      16        49        33   16.4985        16     5.65826      2.1447
    9      16        57        41   18.2206        32    0.941691     2.46741
   10      16        65        49   19.5982        32     4.63243     2.45243
   11      16        70        54   19.6346        20    0.122698      2.4876
   12      16        74        58   19.3316        16    0.700267     2.50753
   13      16        76        60   18.4599         8      1.3582     2.54338
   14      16        78        62   17.7127         8     5.65752     2.63185
   15      16        80        64   17.0651         8     5.25659     2.77212
   16      15        80        65   16.2485         4     5.95131     2.82103
   17      11        80        69   16.2337        16     10.4721     3.09282
   18       6        80        74   16.4429        20     7.71085     3.23797
   19       1        80        79     16.63        20     5.98475     3.47738
Total time run:         19.0674
Total writes made:      80
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     16.7826
Stddev Bandwidth:       8.02919
Max bandwidth (MB/sec): 32
Min bandwidth (MB/sec): 4
Average IOPS:           4
Stddev IOPS:            2.0073
Max IOPS:               8
Min IOPS:               1
Average Latency(s):     3.51805
Stddev Latency(s):      2.97502
Max latency(s):         10.4721
Min latency(s):         0.104524
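
To separate raw disk speed from replication and network overhead, I can also benchmark an individual OSD and look at per-OSD latencies (sketch):

[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph tell osd.0 bench
[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph osd perf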


[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph osd tree
ID   CLASS  WEIGHT     TYPE NAME       STATUS  REWEIGHT  PRI-AFF
 -1         174.65735  root default
 -5          36.38695      host node4
  0    hdd    7.27739          osd.0       up   1.00000  1.00000
  4    hdd    7.27739          osd.4       up   1.00000  1.00000
  8    hdd    7.27739          osd.8       up   1.00000  1.00000
 12    hdd    7.27739          osd.12      up   1.00000  1.00000
 16    hdd    7.27739          osd.16      up   1.00000  1.00000
 -9          36.38695      host node5
  2    hdd    7.27739          osd.2       up   1.00000  1.00000
  6    hdd    7.27739          osd.6       up   1.00000  1.00000
 10    hdd    7.27739          osd.10      up   1.00000  1.00000
 14    hdd    7.27739          osd.14      up   1.00000  1.00000
 18    hdd    7.27739          osd.18      up   1.00000  1.00000
 -7          36.38695      host node6
  1    hdd    7.27739          osd.1       up   1.00000  1.00000
  5    hdd    7.27739          osd.5       up   1.00000  1.00000
  9    hdd    7.27739          osd.9       up   1.00000  1.00000
 13    hdd    7.27739          osd.13      up   1.00000  1.00000
 17    hdd    7.27739          osd.17      up   1.00000  1.00000
 -3          29.10956      host node7
  3    hdd    7.27739          osd.3       up   1.00000  1.00000
  7    hdd    7.27739          osd.7       up   1.00000  1.00000
 11    hdd    7.27739          osd.11      up   1.00000  1.00000
 15    hdd    7.27739          osd.15      up   1.00000  1.00000
-11          36.38695      host node8
 19    hdd    7.27739          osd.19      up   1.00000  1.00000
 20    hdd    7.27739          osd.20      up   1.00000  1.00000
 21    hdd    7.27739          osd.21      up   1.00000  1.00000
 22    hdd    7.27739          osd.22      up   1.00000  1.00000
 23    hdd    7.27739          osd.23      up   1.00000  1.00000
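
All OSDs are ~7.3 TiB HDDs. If it helps, whether BlueStore keeps its DB/WAL on the same rotational device can be confirmed from the OSD metadata, e.g.:

[root@rook-ceph-tools-fc5f9586c-nb2h7 /]# ceph osd metadata 0 | grep -E 'rotational|bluefs'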


[root@node1 ~]# kubectl get pods -n rook-ceph
NAME                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-9x4dc                            3/3     Running     9          14d
csi-cephfsplugin-cv7k9                            3/3     Running     9          14d
csi-cephfsplugin-f6s4k                            3/3     Running     22         14d
csi-cephfsplugin-fn9w4                            3/3     Running     12         14d
csi-cephfsplugin-provisioner-59499cbcdd-txtjq     6/6     Running     0          33h
csi-cephfsplugin-provisioner-59499cbcdd-wj9dx     6/6     Running     11         4d7h
csi-cephfsplugin-tdp99                            3/3     Running     9          14d
csi-rbdplugin-4bv84                               3/3     Running     12         14d
csi-rbdplugin-ddgq8                               3/3     Running     22         14d
csi-rbdplugin-hnnx2                               3/3     Running     10         14d
csi-rbdplugin-provisioner-857d65496c-qzbcq        6/6     Running     0          33h
csi-rbdplugin-provisioner-857d65496c-s6nnd        6/6     Running     9          4d7h
csi-rbdplugin-prthz                               3/3     Running     9          14d
csi-rbdplugin-tjj7z                               3/3     Running     12         14d
rook-ceph-crashcollector-node4-5bb88d4866-rfv54   1/1     Running     1          4d7h
rook-ceph-crashcollector-node5-75dcbb9f45-5tlv8   1/1     Running     0          33h
rook-ceph-crashcollector-node6-777c9f579b-zzf69   1/1     Running     0          33h
rook-ceph-crashcollector-node7-84464d7cc5-2zthq   1/1     Running     0          33h
rook-ceph-crashcollector-node8-66bcdc9c4b-drcj2   1/1     Running     0          3d12h
rook-ceph-mgr-a-fb794cf97-cbmws                   1/1     Running     26         34h
rook-ceph-mon-a-58bf5db667-lhv26                  1/1     Running     0          33h
rook-ceph-mon-b-78df76b9f8-p7w69                  1/1     Running     0          33h
rook-ceph-mon-c-6fc84459fd-nc8hx                  1/1     Running     0          4d8h
rook-ceph-operator-65965c66b5-z5xfl               1/1     Running     2          4d7h
rook-ceph-osd-0-59747d74d6-lg5lh                  1/1     Running     16         4d7h
rook-ceph-osd-1-56c66f7d49-vxj9m                  1/1     Running     0          33h
rook-ceph-osd-10-6cf5f76d57-g5jhb                 1/1     Running     0          33h
rook-ceph-osd-11-5446ff487f-qsdmw                 1/1     Running     0          33h
rook-ceph-osd-12-db4859779-5vrvm                  1/1     Running     8          4d7h
rook-ceph-osd-13-65d4859986-zx8kx                 1/1     Running     0          33h
rook-ceph-osd-14-5c6db6c8c9-ghhtk                 1/1     Running     0          33h
rook-ceph-osd-15-568c6bbf64-m2f2v                 1/1     Running     0          33h
rook-ceph-osd-16-7d475cc4cc-9lzkp                 1/1     Running     8          4d7h
rook-ceph-osd-17-6487d4ff79-5x8bh                 1/1     Running     0          33h
rook-ceph-osd-18-7dd896c985-bjvbz                 1/1     Running     0          33h
rook-ceph-osd-19-5ccfbf896f-gll6k                 1/1     Running     1          4d8h
rook-ceph-osd-2-7db5897c7b-fxc4k                  1/1     Running     0          33h
rook-ceph-osd-20-7b7475f95c-cfprr                 1/1     Running     1          4d8h
rook-ceph-osd-21-7c544458cf-mfbzr                 1/1     Running     0          4d8h
rook-ceph-osd-22-76ff6df84-tf4hx                  1/1     Running     0          4d8h
rook-ceph-osd-23-86b976fdb4-c5xz9                 1/1     Running     0          4d8h
rook-ceph-osd-3-6ccbd79b9d-dvrwj                  1/1     Running     0          33h
rook-ceph-osd-4-56c4c85686-pm488                  1/1     Running     9          4d7h
rook-ceph-osd-5-75dd798967-rmhrp                  1/1     Running     0          33h
rook-ceph-osd-6-f66fd44fb-f2h2d                   1/1     Running     0          33h
rook-ceph-osd-7-6fbb57486c-4xvzh                  1/1     Running     0          33h
rook-ceph-osd-8-59567ccf6-xnmx7                   1/1     Running     8          4d7h
rook-ceph-osd-9-5b9bc76c98-f8dbf                  1/1     Running     0          33h
rook-ceph-osd-prepare-node4-jgwvb                 0/1     Completed   0          3h1m
rook-ceph-osd-prepare-node5-jwvsk                 0/1     Completed   0          3h1m
rook-ceph-osd-prepare-node6-r2ps9                 0/1     Completed   0          3h1m
rook-ceph-osd-prepare-node7-cmfsf                 0/1     Completed   0          3h1m
rook-ceph-osd-prepare-node8-kk256                 0/1     Completed   0          3h1m
rook-ceph-tools-fc5f9586c-nb2h7                   1/1     Running     0          34h
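
Some of the OSD pods and the mgr show a number of restarts; if that is relevant, the reasons can be pulled from the previous container logs, e.g.:

[root@node1 ~]# kubectl -n rook-ceph logs rook-ceph-osd-0-59747d74d6-lg5lh --previous | tail -n 50
[root@node1 ~]# kubectl -n rook-ceph describe pod rook-ceph-mgr-a-fb794cf97-cbmws | grep -A 5 'Last State'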
