Unexpectedly slow write performance (RBD cinder volumes)

I have been benchmarking our Ceph installation for the last week or so, and I've come across an issue that I'm having some difficulty with.

Ceph bench reports reasonable write throughput at the OSD level:

ceph tell osd.0 bench
{ "bytes_written": 1073741824,
  "blocksize": 4194304,
  "bytes_per_sec": "47288267.000000"}

Running this across all OSDs produces on average 50-55 MB/s (the osd.0 run above works out to roughly 45 MB/s), which is fine with us; we were expecting roughly 100 MB/s / 2, since the journal and the OSD data share the same disk on separate partitions.
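For anyone who wants to reproduce those per-OSD numbers, this is roughly how I gathered them. Only "ceph tell osd.N bench" itself is the actual command; the loop, the output scraping, and the hard-coded OSD count of 33 are a sketch for illustration:

#!/bin/bash
# Run the built-in OSD bench on every OSD and average the results.
total=0
for i in $(seq 0 32); do
    bps=$(ceph tell osd.$i bench 2>&1 | grep bytes_per_sec | tr -dc '0-9.')
    mbps=$(echo "scale=1; $bps / 1048576" | bc)   # bytes/s -> MB/s
    echo "osd.$i: ${mbps} MB/s"
    total=$(echo "$total + $mbps" | bc)
done
echo "average: $(echo "scale=1; $total / 33" | bc) MB/s"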

What I wasn't expecting was the following:

I tested 1, 2, 4, 8, 16, 24, and 32 VMs simultaneously writing against 33 OSDs. Aggregate write throughput peaked at just under 400 MB/s:

VMs  Aggregate MB/s
 1   196.0
 2   285.9
 4   351.9
 8   386.5
16   363.9
24   353.6
32   349.0

I was hoping to see something closer to (number of OSDs) * (average ceph bench value), i.e. approximately 1.2 GB/s peak aggregate write throughput.
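For reference, this is roughly how the multi-VM runs were driven. The ssh orchestration, the guest names, and all fio options other than the 16-way concurrency are illustrative assumptions rather than the exact harness I used:

#!/bin/bash
# Start the same sequential-write fio job on each guest in parallel and
# sum the per-guest bandwidth afterwards. Guest names, the /dev/vdb
# target, and the 4M/1G sizes are placeholders; numjobs=16 matches the
# concurrency I actually tested with.
for vm in vm01 vm02 vm03 vm04; do
    ssh "$vm" 'fio --name=seq-write --rw=write --bs=4M --direct=1 \
        --ioengine=libaio --numjobs=16 --size=1G --group_reporting \
        --filename=/dev/vdb' > "fio-$vm.log" &
done
wait
grep aggrb fio-*.log   # aggregate write bandwidth reported per guest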

We're seeing excellent performance for both sequential (read) and random (randread) reads, but writes are a bit of a bother.

Does anyone have any suggestions?

We have a 20 Gb/s network
I used fio with 16-thread concurrency (roughly as in the sketch above)
We're running Scientific Linux 6.4
2.6.32 kernel
Ceph Dumpling 0.67.1-0.el6
OpenStack Grizzly
Libvirt 0.10.2
qemu-kvm 0.12.1.2-2.355.el6.2.cuttlefish
(I'm using qemu-kvm from the ceph-extras repository, which doesn't appear to have a .dumpling build yet.)
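For completeness, the versions above came from the usual stock tooling; the exact package names here are the standard EL6 ones and are my assumption:

# Confirm the environment listed above (EL6 package names assumed).
cat /etc/redhat-release    # Scientific Linux release 6.4
uname -r                   # 2.6.32-* kernel
rpm -q ceph librbd1        # 0.67.1-0.el6 (Dumpling)
rpm -q qemu-kvm libvirt    # 0.12.1.2-2.355.el6.2.cuttlefish / 0.10.2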

Thanks very much for any assistance.

Greg
