Hi Robert
1) Can you specify how many threads were used in the 4K write rados test?
I suspect only 16 threads were used, since that is the default, and the
average latency of 2.9 ms works out to roughly 344 IOPS per thread; your
average of 5512 IOPS divided by 344 gives 16.02. If so, this is too low:
with 12 OSDs you need 64 or 128 threads to get a couple of threads on each
OSD and actually stress it. Use the -t option to specify the thread count.
It is also better to run more than one client process, preferably from
different hosts, and add up the total IOPS.
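For example (the pool name "testpool" is only a placeholder, adjust to your
setup), something along these lines would drive more concurrency:

  rados bench -p testpool 30 write -b 4096 -t 64 --no-cleanup
  rados bench -p testpool 30 rand -t 64

-t sets the number of concurrent operations, -b the write/object size for
the write test, and --no-cleanup keeps the objects around so the rand read
test has data to read.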
2) The read latency you see of 0.4 ms is good. The write latency of 2.9 ms
is not very good but not terrible: a fast all-flash bluestore system should
give around 1 to 1.5 ms write latency (i.e. roughly 600 to 1000 IOPS per
thread), and some users manage to go below 1 ms, but it is not easy. The
disk model as well as tuning your CPU C-states and P-states (frequency
scaling) will help reduce latency; there are several threads on this
mailing list that go into this in great detail, and search for a
presentation by Nick Fisk.
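Just as an illustration (the exact knobs depend on your hardware and
distribution), on many systems this boils down to something like:

  # keep cores at full frequency (cpupower from the linux-tools package)
  cpupower frequency-set -g performance
  # avoid deep C-states; alternatively boot with intel_idle.max_cstate=1
  cpupower idle-set -D 10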
3) Running a simple tool like atop while doing the tests can also reveal a
lot about where the bottlenecks are; the % utilization of the disks and
CPUs is important. However, I expect that if you were using only 16 threads
they will not be highly utilized, as the dominant factor would be latency,
as noted earlier.
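For example, run it with a short interval on each OSD node while the
benchmark is going:

  atop 2

and watch the DSK and CPU lines for high busy percentages.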
/Maged
On 24/05/2019 13:22, Robert Sander wrote:
Hi,
we have a small cluster at a customer's site with three nodes and 4
SSD-OSDs each.
Connected with 10G the system is supposed to perform well.
rados bench shows ~450MB/s write and ~950MB/s read speeds with 4MB
objects but only 20MB/s write and 95MB/s read with 4KB objects.
This is a bit disappointing, as the same 4K performance is also seen
in KVM VMs using RBD.
Is there anything we can do to improve performance with small objects
/ block sizes?
Jumbo frames have already been enabled.
4MB objects write:
Total time run: 30.218930
Total writes made: 3391
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 448.858
Stddev Bandwidth: 63.5044
Max bandwidth (MB/sec): 552
Min bandwidth (MB/sec): 320
Average IOPS: 112
Stddev IOPS: 15
Max IOPS: 138
Min IOPS: 80
Average Latency(s): 0.142475
Stddev Latency(s): 0.0990132
Max latency(s): 0.814715
Min latency(s): 0.0308732
4MB objects rand read:
Total time run: 30.169312
Total reads made: 7223
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 957.662
Average IOPS: 239
Stddev IOPS: 23
Max IOPS: 272
Min IOPS: 175
Average Latency(s): 0.0653696
Max latency(s): 0.517275
Min latency(s): 0.00201978
4K objects write:
Total time run: 30.002628
Total writes made: 165404
Write size: 4096
Object size: 4096
Bandwidth (MB/sec): 21.5351
Stddev Bandwidth: 2.0575
Max bandwidth (MB/sec): 22.4727
Min bandwidth (MB/sec): 11.0508
Average IOPS: 5512
Stddev IOPS: 526
Max IOPS: 5753
Min IOPS: 2829
Average Latency(s): 0.00290095
Stddev Latency(s): 0.0015036
Max latency(s): 0.0778454
Min latency(s): 0.00174262
4K objects read:
Total time run: 30.000538
Total reads made: 1064610
Read size: 4096
Object size: 4096
Bandwidth (MB/sec): 138.619
Average IOPS: 35486
Stddev IOPS: 3776
Max IOPS: 42208
Min IOPS: 26264
Average Latency(s): 0.000443905
Max latency(s): 0.0123462
Min latency(s): 0.000123081
Regards