On 29/04/2021 11:52 pm, Schmid, Michael wrote:
I am new to ceph and at the moment I am doing some performance tests with a 4 node ceph-cluster (pacific, 16.2.1).
Ceph doesn't do well with small numbers; 4 OSDs is really marginal.
Your latency isn't crash hot either. What size (replica count) are you
running on the pool? The amount of RAM per node (8GB) is the bare
minimum as well, so your Ceph setup is really constrained.
Do your OSDs have access to the raw device? Are they BlueStore?
Same test on my cluster:
* 5 nodes
* 20 OSDs (total)
  o mix of SATA and SAS spinners
  o WAL/DB on SSD
* 64GB RAM per node
* 4 x 1GbE bond
rados bench -p ceph 10 write -b 4M -t 16 --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size
4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_vnh_3642327
  sec  Cur ops  started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       16        0         0         0         0           -          0
    1       16       58        42    167.99        168     0.21848   0.329228
    2       16      102        86    171.986       176    0.456715   0.325869
    3       16      154       138    183.983       208    0.109888   0.319586
    4       16      206       190    189.981       208    0.188891   0.320275
    5       16      258       242    193.581       208    0.261014   0.319318
    6       16      308       292    194.647       200    0.450672   0.319268
    7       16      358       342    195.408       200    0.127415   0.316999
    8       16      406       390    194.98        192    0.176382   0.321384
    9       16      456       440    195.535       200    0.287347   0.318749
   10       16      508       492    196.779       208    0.279796   0.318067
Total time run: 10.2741
Total writes made: 508
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 197.78
Stddev Bandwidth: 14.2111
Max bandwidth (MB/sec): 208
Min bandwidth (MB/sec): 168
Average IOPS: 49
Stddev IOPS: 3.55278
Max IOPS: 52
Min IOPS: 42
Average Latency(s): 0.318968
Stddev Latency(s): 0.137534
Max latency(s): 0.913779
Min latency(s): 0.0933294
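As a quick sanity check on the summary above (not part of the original output), the aggregate figures follow directly from the totals rados bench reports; this sketch just redoes that arithmetic, using Little's law to approximate average latency from the 16 in-flight writes (-t 16):

```python
# Recompute the rados bench summary from the reported totals.
total_time_s = 10.2741        # "Total time run"
total_writes = 508            # "Total writes made"
object_size_bytes = 4194304   # 4 MiB objects (-b 4M)
concurrency = 16              # concurrent writes (-t 16)

# Bandwidth = data written / wall time, in MB/s (MiB here, as rados bench uses)
bandwidth_mb_s = total_writes * (object_size_bytes / (1024 * 1024)) / total_time_s

# IOPS = operations / wall time
avg_iops = total_writes / total_time_s

# Little's law: avg latency ~= in-flight ops / throughput
approx_avg_latency_s = concurrency / avg_iops

print(f"Bandwidth (MB/sec): {bandwidth_mb_s:.2f}")       # ~197.78
print(f"Average IOPS: {avg_iops:.0f}")                   # ~49
print(f"Approx avg latency(s): {approx_avg_latency_s:.3f}")
```

The Little's-law estimate (~0.32 s) lines up with the reported average latency of 0.318968 s, which is expected for a benchmark that keeps the queue depth constant.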
--
Lindsay
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx