Hi,
I am interested in using rbd block devices inside kvm/qemu VMs. I set up
a tiny ceph cluster on a single server machine, with 6 SCSI disks for
storing data. On the client machine, sequential read throughput looks
reasonable (~60 MB/s) when I run fio against rbd block devices mapped
outside of a VM. The read throughput does not look reasonable when I
attach rbd images as block devices inside a kvm/qemu VM: it jumps as
high as 200 MB/s, which seems too good to be true for this cluster.
'tcpdump' shows that the read requests do reach the ceph server. What
makes things confusing is that 'iotop' does not show any disk I/O during
the sequential reads, while 'top' shows 'ceph-osd' utilizing 100% CPU.
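For reference, this is roughly how I checked the traffic (the interface
name is specific to my setup; node-0 is the ceph host from the xml
below):
--------------------------------------------------------
# on the client: watch for traffic to/from the ceph node while fio runs
tcpdump -i eth0 host node-0
--------------------------------------------------------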
This is the section for the rbd disk in the VM's libvirt xml file.
--------------------------------------------------------
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/image3'>
    <host name='node-0' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
--------------------------------------------------------
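In case it is relevant: the <driver> element above does not specify a
cache mode. A variant I could try, explicitly disabling host-side
caching (cache='none' is a standard libvirt driver attribute; the rest
of the stanza is unchanged):
--------------------------------------------------------
<disk type='network' device='disk'>
  <!-- cache='none' bypasses the host page cache, which should rule out
       cached reads as the source of the ~200 MB/s numbers -->
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd/image3'>
    <host name='node-0' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
--------------------------------------------------------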
This is the fio job file I used to measure throughput.
------------------------------
[global]
rw=read
bs=4m
thread=0
time_based=1
runtime=300
invalidate=1
direct=1
sync=1
ioengine=sync
[sr-vda]
filename=${DEV}
---------------------------------
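For completeness, this is how I invoke the job file (the file name and
device paths are just examples; fio expands ${DEV} from the
environment):
---------------------------------
# inside the VM, against the virtio disk backed by rbd/image3
DEV=/dev/vda fio seqread.fio

# on the client host, against a kernel-mapped rbd device
DEV=/dev/rbd0 fio seqread.fio
---------------------------------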
Does anyone have suggestions or hints for things I could try? Thank you
very much!
Xing