Hello,
If this is too long for you, there is a TL;DR section at the bottom.
I created a Ceph cluster made of 3 SuperMicro servers, each with 2 OSDs
(WD Red spinning drives), and I would like to optimize the performance
of RBD. I believe it is being held back by some wrong Ceph
configuration, because from my observation all resources (CPU, RAM,
network, disks) are basically unused / idling even when I put load on
the RBD.
Each drive should do about 50 MB/s read / write. When I run the RADOS
benchmark I see values that are somewhat acceptable, and the
interesting part is that while the benchmark runs I can see all disks
reading / writing at their limits, heavy network utilization and even
some CPU utilization. On the other hand, when I put any load on the
RBD device, performance is terrible: reading is very slow (20 MB/s),
writing as well (5 - 20 MB/s), and running dd if=/dev/zero of=/dev/rbd0
writes at 5 MB/s. The weirdest part: resources are almost unused - no
CPU usage, no network traffic, minimal disk activity.
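(One caveat I am aware of: plain dd defaults to 512-byte blocks and
goes through the page cache, so a fairer variant of that test would be
large blocks with direct I/O, along these lines - count=256 is just an
arbitrary amount of data, /dev/rbd0 is the mapped image:)

[root@ceph1 cephadm]# dd if=/dev/zero of=/dev/rbd0 bs=4M count=256 \
      oflag=direct status=progress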
It looks to me as if Ceph isn't even trying to perform as long as the
access is via RBD. Has anyone ever seen this kind of issue? Is there
any way to track down why it is so slow? Here are some outputs:
[root@ceph1 cephadm]# ceph --version
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus
(stable)
[root@ceph1 cephadm]# ceph health
HEALTH_OK
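(I left the pools at their defaults, so replication is presumably
size=3, meaning every client write is stored three times and the
sustainable write rate is well below raw disk / network speed; it can
be checked like this, and the same for the pool backing the RBD image:)

[root@ceph1 cephadm]# ceph osd pool get testbench size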
I would expect the write speed to be at least the 50 MB/s I get when
writing to the disks directly; rados bench reaches this speed
(sometimes even more):
[root@ceph1 cephadm]# rados bench -p testbench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size
4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_ceph1.lan.insw.cz_60873
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        22         6   23.9966        24    0.966194    0.565671
    2      16        37        21   41.9945        60     1.86665    0.720606
    3      16        54        38   50.6597        68     1.07856    0.797677
    4      16        70        54   53.9928        64     1.58914     0.86644
    5      16        83        67   53.5924        52    0.208535    0.884525
    6      16        97        81   53.9923        56     2.22661    0.932738
    7      16       111        95   54.2781        56      1.0294    0.964574
    8      16       133       117   58.4921        88    0.883543     1.03648
    9      16       143       127   56.4369        40    0.352169     1.00382
   10      16       154       138   55.1916        44    0.227044     1.04071
Read speed is even higher, as it's probably reading from multiple
drives at once:
[root@ceph1 cephadm]# rados bench -p testbench 100 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        96        80   319.934       320    0.811192    0.174081
    2      13       161       148   295.952       272    0.606672    0.181417
Running rbd bench shows writes at 50 MB/s (which is OK) and reads at
20 MB/s (not so OK), but the REAL performance is much worse: when I
actually access the block device and try to write or read anything, it
sometimes drops to an extremely low 5 MB/s or 20 MB/s.
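(For reference, the rbd bench numbers above came from invocations
along these lines; testbench/test-image stands in for my actual
pool/image name:)

[root@ceph1 cephadm]# rbd bench --io-type write --io-size 4M \
      --io-threads 16 testbench/test-image
[root@ceph1 cephadm]# rbd bench --io-type read --io-size 4M \
      --io-threads 16 testbench/test-image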
Why is that? What can I do to debug / trace / optimize this issue? I
don't know if there is any point in upgrading the hardware if,
according to monitoring, the current HW is basically not utilized at
all.
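(One test I can still run, in case it helps narrow this down: fio
against librbd directly, bypassing the kernel RBD client, with
something like the job below - pool / image names would need
adjusting. If librbd is fast and only /dev/rbd0 is slow, that would
point at the krbd path rather than the cluster:)

[root@ceph1 cephadm]# fio --name=rbdwrite --ioengine=rbd --clientname=admin \
      --pool=testbench --rbdname=test-image --rw=write --bs=4M \
      --iodepth=16 --direct=1 --size=1G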
TL;DR
I created a Ceph cluster with 6 OSDs (dedicated 1G network, six 4 TB
spinning drives). The rados performance benchmark shows acceptable
performance, but RBD performance is absolutely terrible (very slow
reads and very slow writes). When I put any kind of load on the
cluster, almost all resources are unused / idling, so this feels like
a software configuration issue.