Hi,
I have done some rbd benchmarks:
root@recev08:~# rbd bench-write image01 --pool=bench --io-size=4096 --io-total=3G --io-threads=16
rbd: bench-write is deprecated, use rbd bench --io-type write ...
bench type write io_size 4096 io_threads 16 bytes 3221225472 pattern sequential
SEC OPS OPS/SEC BYTES/SEC
1 59616 59870.9 234 MiB/s
2 127504 63887.2 250 MiB/s
3 198032 66103.5 258 MiB/s
4 262944 65805.2 257 MiB/s
5 277712 55589.6 217 MiB/s
6 291712 46418.8 181 MiB/s
7 305600 35618.9 139 MiB/s
8 319584 24310.2 95 MiB/s
9 332880 13987.1 55 MiB/s
10 346128 13683.1 53 MiB/s
11 359312 13519.9 53 MiB/s
[...]
43 786016 13724.7 54 MiB/s
elapsed: 43 ops: 786432 ops/sec: 18275.3 bytes/sec: 71 MiB/s
We get 18K writes/s with 16 threads.
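Side note: since bench-write is deprecated, I believe the equivalent run
with the newer syntax would look roughly like this (same pool/image
assumed, not run here):

  rbd bench --io-type write --io-size 4096 --io-total 3G --io-threads 16 bench/image01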
root@recev08:~# rbd bench-write image01 --pool=bench --io-size=4096 --io-total=3G --io-threads=512
rbd: bench-write is deprecated, use rbd bench --io-type write ...
bench type write io_size 4096 io_threads 512 bytes 3221225472 pattern sequential
SEC OPS OPS/SEC BYTES/SEC
1 41472 42840.4 167 MiB/s
2 84480 42924.9 168 MiB/s
3 131072 44096.1 172 MiB/s
4 172032 43352.4 169 MiB/s
5 212480 42769.1 167 MiB/s
6 253952 42495.6 166 MiB/s
7 295936 42324.7 165 MiB/s
8 335872 40959.6 160 MiB/s
9 375296 40620 159 MiB/s
10 411136 39730.9 155 MiB/s
11 447488 38706.9 151 MiB/s
12 482816 37286.2 146 MiB/s
13 518656 36556.5 143 MiB/s
14 552960 35589.5 139 MiB/s
15 585728 34890.2 136 MiB/s
16 622080 34946.1 137 MiB/s
17 655872 34583.3 135 MiB/s
18 691200 34536.2 135 MiB/s
19 732160 35782.5 140 MiB/s
20 772608 37375.7 146 MiB/s
elapsed: 20 ops: 786432 ops/sec: 38580.5 bytes/sec: 151 MiB/s
We get 38K writes/s with 512 threads.
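Note that these runs use the default sequential pattern (as shown in the
header above); for a VDI-like workload a random pattern is probably more
representative. If I read the options right, something like this should
do it (not run here):

  rbd bench --io-type write --io-size 4096 --io-total 3G --io-threads 512 --io-pattern rand bench/image01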
The disks are rated for a maximum of 58K random writes/s.
It seems we are able to saturate the network with a 40 KiB write size:
root@recev08:~# rbd bench-write image01 --pool=bench --io-size=40960 --io-total=10G --io-threads=256
rbd: bench-write is deprecated, use rbd bench --io-type write ...
bench type write io_size 40960 io_threads 256 bytes 10737418240 pattern sequential
SEC OPS OPS/SEC BYTES/SEC
1 28160 28415.7 1.1 GiB/s
2 55552 28072.2 1.1 GiB/s
3 83200 27930.1 1.1 GiB/s
4 110592 27739.5 1.1 GiB/s
5 134912 27076.7 1.0 GiB/s
6 159232 26235.2 1.0 GiB/s
7 184320 25691.7 1004 MiB/s
8 208896 25098.8 980 MiB/s
9 232704 24441.7 955 MiB/s
10 256512 24339.3 951 MiB/s
elapsed: 10 ops: 262144 ops/sec: 25264.2 bytes/sec: 987 MiB/s
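Rough back-of-the-envelope math on why this looks network-bound
(assuming the bench pool also has size=3):

  25264 ops/s * 40960 B    ~ 987 MiB/s  ~  8.3 Gbit/s  written by the client
  987 MiB/s   * 3 replicas ~ 2.9 GiB/s  ~ 24.8 Gbit/s  of raw writes across the cluster

so a large part of that traffic has to cross the 2x10Gbps links again
for replication.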
Thanks
On 6/7/22 at 13:13, Eneko Lacunza wrote:
Hi all,
We have a proof of concept HCI cluster with Proxmox v7 and Ceph v15.
We have 3 nodes:
2x Intel Xeon Gold 5218 (16 cores/32 threads per socket)
Dell PERC H330 Controller (SAS3)
4x Samsung PM1634 3.84TB SAS 12Gb SSDs
Network is LACP 2x10Gbps
This cluster is used for some VDI tests with Windows 10 VMs.
The pool has size=3/min_size=2 and is used for RBD (KVM/QEMU VMs).
We are seeing Ceph performance reach about 600 MiB/s read and
500 MiB/s write, with about 6,000 read IOPS and about 2,000 write IOPS.
Reads and writes are simultaneous (mixed I/O), as reported by Ceph.
Is this reasonable performance for the hardware we have? We see
about 25-30% CPU usage on the nodes, and ceph-osd processes spiking
between 600% and 1000% (I guess that means 6-10 threads fully in use).
I have checked the disk caches, but the disks report cache as "Not
applicable".
The BIOS power profile is set to performance and C-states are disabled.
Thanks
Eneko Lacunza
Technical Director
Binovo IT Human Project
Tel. +34 943 569 206 |https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx