qemu/rbd: threads vs native, performance tuning

Hi,

I am trying to get better performance in my virtual machines (qemu with RBD storage).
These are my RBD settings:
    "rbd_cache": "true",
    "rbd_cache_block_writes_upfront": "false",
    "rbd_cache_max_dirty": "25165824",
    "rbd_cache_max_dirty_age": "1.000000",
    "rbd_cache_max_dirty_object": "0",
    "rbd_cache_size": "33554432",
    "rbd_cache_target_dirty": "16777216",
    "rbd_cache_writethrough_until_flush": "true",

I decided to test native mode and ran fio like this inside a VM:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
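The same invocation split up for readability (a 4 GiB file of 4k random I/O at iodepth 64, 75% reads / 25% writes, via libaio with O_DIRECT):

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=random_read_write.fio \
        --bs=4k --iodepth=64 --size=4G \
        --readwrite=randrw --rwmixread=75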

I tested these two driver configurations in qemu:
<driver name='qemu' type='raw' cache='directsync' io='native' discard='unmap'/>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>

I ran fio a few times to get some variance, and here are the results:
<driver name='qemu' type='raw' cache='directsync' io='native' discard='unmap'/>
 READ: io=3071.7MB, aggrb=96718KB/s, minb=96718KB/s, maxb=96718KB/s, mint=32521msec, maxt=32521msec
WRITE: io=1024.4MB, aggrb=32253KB/s, minb=32253KB/s, maxb=32253KB/s, mint=32521msec, maxt=32521msec
 READ: io=3071.7MB, aggrb=96451KB/s, minb=96451KB/s, maxb=96451KB/s, mint=32611msec, maxt=32611msec
WRITE: io=1024.4MB, aggrb=32164KB/s, minb=32164KB/s, maxb=32164KB/s, mint=32611msec, maxt=32611msec
 READ: io=3071.7MB, aggrb=93763KB/s, minb=93763KB/s, maxb=93763KB/s, mint=33546msec, maxt=33546msec
WRITE: io=1024.4MB, aggrb=31267KB/s, minb=31267KB/s, maxb=31267KB/s, mint=33546msec, maxt=33546msec
---
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
DISK     = [ driver = "raw" , cache = "directsync" , discard = "unmap" , io = "native" ]
 READ: io=3071.7MB, aggrb=68771KB/s, minb=68771KB/s, maxb=68771KB/s, mint=45737msec, maxt=45737msec
WRITE: io=1024.4MB, aggrb=22933KB/s, minb=22933KB/s, maxb=22933KB/s, mint=45737msec, maxt=45737msec
 READ: io=3071.7MB, aggrb=67794KB/s, minb=67794KB/s, maxb=67794KB/s, mint=46396msec, maxt=46396msec
WRITE: io=1024.4MB, aggrb=22607KB/s, minb=22607KB/s, maxb=22607KB/s, mint=46396msec, maxt=46396msec
 READ: io=3071.7MB, aggrb=67536KB/s, minb=67536KB/s, maxb=67536KB/s, mint=46573msec, maxt=46573msec
WRITE: io=1024.4MB, aggrb=22521KB/s, minb=22521KB/s, maxb=22521KB/s, mint=46573msec, maxt=46573msec

So directsync with io='native' is around 40% faster than writeback with the default io='threads', according to this.
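(Quick check from the first run of each: 96718 / 68771 ≈ 1.41 for reads and 32253 / 22933 ≈ 1.41 for writes, so roughly 40% more throughput with directsync + native.)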
But I have a few questions now.
1. Is it safe to run cache='directsync' io='native'? The documentation refers to writeback/threads.
2. How can I get even better performance? These benchmarks are from a pool with 11 NVMe BlueStore OSDs and 2x 10Gb NICs. It feels pretty slow IMO.

Thanks,
Elias
