pushing rbd write performance

Hi all,

I've been testing a small cluster with 4 nodes and SSD storage, but I can't seem to push it beyond ~250 MB/s with qemu and ~300 MB/s with krbd, with quite erratic bandwidth behavior.

4 nodes, 3 SSDs each, cut down to 160 GB partitions to get uniform I/O distribution. Raw disk performance is just under 300 MB/s and up to 40k IOPS (various Intel drives).

Network: 10 GbE, jumbo frames (MTU 9000), txqueuelen 20000
Backing fs is btrfs, with a 10 GB journal.
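
(For reference, the network itself can be sanity-checked roughly like this; osd-node1/osd-node2 are placeholder hostnames, not something from my setup:)

# confirm 9000-byte frames actually pass end to end (8972 = 9000 - 28 bytes of headers)
ping -M do -s 8972 osd-node2
# raw TCP throughput between two nodes
iperf -s                        # on osd-node2
iperf -c osd-node2 -P 4 -t 30   # on osd-node1, 4 parallel streams for 30 seconds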

RBD was set up with an 8 MB object size, no striping.
Both krbd and qemu were run on one of the nodes; I used fio for benchmarking with various settings.
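
(The exact fio parameters varied between runs; for illustration only, a sequential-write test against a mapped krbd device could look roughly like this, with pool/image names as placeholders:)

# 8 MB objects correspond to --order 23 (2^23 bytes)
rbd create bench-img --pool rbd --size 102400 --order 23
rbd map bench-img --pool rbd    # shows up as e.g. /dev/rbd0

fio --name=seqwrite --filename=/dev/rbd0 --rw=write --bs=4M \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --group_reporting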

Any ideas?

Relevant config part (some values may not make much sense, since I tried many ways to push it harder):
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = false
rbd_cache_size = 2 GiB
rbd_cache_max_dirty = 2 GiB
rbd_cache_target_dirty = 256 MiB
rbd_cache_max_dirty_age = 1.0

[osd]
max open files = 112400
osd op threads = 12
osd disk threads = 1
journal dio = true
journal aio = true
journal max write bytes = 1 GiB
journal max write entries = 50000
journal queue max bytes = 1 GiB
journal queue max ops = 50000

filestore op threads = 6
filestore queue max ops = 4096
filestore queue max bytes = 16 MiB
filestore queue committing max ops = 4096
filestore queue committing max bytes = 16 MiB
filestore min sync interval = 15
filestore max sync interval = 15
filestore fd cache size = 10240
filestore journal parallel = true
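
(In case it helps with diagnosis: the OSD admin socket exposes perf counters that can show whether the journal or the filestore sync is the limiting stage; osd.0 is just an example.)

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
# the filestore/journal sections report queue depths and latencies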

--
Regards,
Konrad Gutkowski