Are you using format 2 RBD images?
We found a major performance hit with format 2 images under 10.2.0 in some testing today. When we switched to format 1 images we got 10x the random write IOPS (from 1,600 IOPS up to 30,000 IOPS on the same test).
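To check which format an existing image uses, or to create one of each for a side-by-side test, something like the following should work (pool and image names here are placeholders; --size is in MB, and note that format 1 creation is deprecated in Jewel but still available):
# rbd info rbd/vm-disk-01 | grep format
# rbd create rbd/test-fmt1 --size 10240 --image-format 1
# rbd create rbd/test-fmt2 --size 10240 --image-format 2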
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ken Peng
Sent: Wednesday, 25 May 2016 5:02 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: seqwrite gets good performance but random rw gets worse
Hello,
We have a cluster with 20+ hosts and 200+ OSDs; each OSD is a 4 TB SATA disk, with no SSD cache.
OS is Ubuntu 16.04 LTS, ceph version 10.2.0
Both data network and cluster network are 10Gbps.
We run Ceph as a block storage service only (the RBD client runs inside the VM).
Testing inside a VM with the sysbench tool, we see that sequential write gets relatively good performance, reaching 170.37Mb/sec, but random read/write always gets a bad result, as low as 474.63Kb/sec (shown below).
Can you help explain why the random I/O is so much worse? Thanks. This is what sysbench outputs:
# sysbench --test=fileio --file-total-size=5G prepare
sysbench 0.4.12: multi-threaded system evaluation benchmark
128 files, 40960Kb each, 5120Mb total
Creating files for the test...
# sysbench --test=fileio --file-total-size=5G --file-test-mode=seqwr --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.
Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing sequential write (creation) test
Threads started!
Done.
Operations performed: 0 Read, 327680 Write, 128 Other = 327808 Total
Read 0b Written 5Gb Total transferred 5Gb (170.37Mb/sec)
10903.42 Requests/sec executed
Test execution summary:
total time: 30.0530s
total number of events: 327680
total time taken by event execution: 28.5936
per-request statistics:
min: 0.01ms
avg: 0.09ms
max: 192.84ms
approx. 95 percentile: 0.03ms
Threads fairness:
events (avg/stddev): 327680.0000/0.00
execution time (avg/stddev): 28.5936/0.00
# sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.
Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.
Operations performed: 5340 Read, 3560 Write, 11269 Other = 20169 Total
Read 83.438Mb Written 55.625Mb Total transferred 139.06Mb (474.63Kb/sec)
29.66 Requests/sec executed
Test execution summary:
total time: 300.0216s
total number of events: 8900
total time taken by event execution: 6.4774
per-request statistics:
min: 0.01ms
avg: 0.73ms
max: 90.18ms
approx. 95 percentile: 1.60ms
Threads fairness:
events (avg/stddev): 8900.0000/0.00
execution time (avg/stddev): 6.4774/0.00
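For reference, the random r/w run above uses a single thread, synchronous I/O, and an fsync every 100 requests, so it mostly measures the latency of one outstanding 16Kb request at a time. A run with more threads and O_DIRECT, with periodic fsync disabled, can help show how much of the gap is queue depth rather than backend latency. This is only a sketch using standard sysbench 0.4.12 options against the same file set prepared above; adjust thread count and sizes as needed:
# sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw --file-block-size=16384 --num-threads=16 --file-extra-flags=direct --file-fsync-freq=0 --max-time=300 --max-requests=0 run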