Hi All,
We have built a Ceph cluster with 72 OSDs and replica 3; everything is working fine. We did some performance testing and found a very interesting issue.
We have a KVM + Libvirt + Ceph setup and tested two cases:
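For reference, a quick way to confirm the pool replication (pool name VM_pool is taken from the disk XML below; commands shown for illustration only):
ceph -s                              # cluster health, all 72 OSDs up/in
ceph osd pool get VM_pool size       # should report size: 3 (replica 3)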
Case 1. KVM + Libvirt + Ceph w/ rbd backend
The KVM hypervisor node creates a VM that uses an rbd block device as its storage backend (disk XML below). Running dd with fdatasync inside the VM gives ~500MB/s.
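The test is a plain dd with fdatasync along these lines (block size, count and output path are illustrative, not the exact values used):
# run inside the guest against the virtio disk
dd if=/dev/zero of=/root/ddtest bs=1M count=4096 conv=fdatasync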
Case 2. KVM + Libvirt + Ceph w/ qcow2 file on a mounted rbd device
The KVM hypervisor node maps the rbd image and mounts it directly as a local partition, e.g. mount /dev/rbd0 /mnt/VM_pool. We then create the VM with a qcow2 disk file placed on that partition (roughly as sketched below) and run the same dd with fdatasync; it gives ~850MB/s.
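A minimal sketch of that setup (the image name vmstore, the filesystem type and the sizes are our illustration, not from the original configuration):
# on the hypervisor; image/filesystem details are illustrative
rbd create VM_pool/vmstore --size 1024000   # hypothetical rbd image to hold the qcow2 files
rbd map VM_pool/vmstore                     # appears as /dev/rbd0
mkfs.xfs /dev/rbd0                          # filesystem type is an assumption
mount /dev/rbd0 /mnt/VM_pool
qemu-img create -f qcow2 /mnt/VM_pool/CentOS1.qcow2 100G   # qcow2 disk referenced in the XML below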
Both cases were tested on the same hypervisor node with the same VM configuration. Why does mounting rbd0 on the hypervisor filesystem as a partition and putting the VM disk file there give much better performance? Any ideas?
Thank you!
---- KVM Ceph VM disk setting (Case 1: rbd backend) ----
<disk type='network' device='disk'>
  <source protocol='rbd' name='VM_pool/VM1.img'>
    <host name='mon1' port='6789'/>
    <host name='mon2' port='6789'/>
    <host name='mon3' port='6789'/>
  </source>
  <auth username='libvirt' type='ceph'>
    <secret type='ceph' uuid='856b660e-ce4e-4a91-a7be-f17e469024c5'/>
  </auth>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
---- KVM VM disk created on the Ceph rbd0 partition (Case 2: qcow2) ----
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/VM_Pool/CentOS1.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
=================================
Bill