Re: rbd cache + libvirt

On 06/08/2015 03:17 PM, Andrey Korolyov wrote:
On Mon, Jun 8, 2015 at 3:44 PM, Arnaud Virlet <avirlet@xxxxxxxxxxxxxxx> wrote:


On 06/08/2015 01:59 PM, Andrey Korolyov wrote:


Do I understand correctly that you are using a certain template engine
for both OCFS- and RBD-backed volumes within a single VM's config, and
that it does not allow per-disk cache mode separation in the suggested way?

My VM has three disks on the RBD backend. Disks 1 and 2 have cache=writeback,
and disk 3 (for OCFS2) has cache=none in my VM XML file. When I start the VM,
libvirt produces a launch string with cache=writeback for disks 1 and 2, and with
cache=none for disk 3.
Even with cache=none for disk 3, the setting seems to have no effect unless I also set
rbd cache = false in ceph.conf.
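
For reference, the relevant parts of my domain XML look roughly like this (a trimmed sketch: the monitor address, secret UUID, and target device names here are placeholders rather than my exact config):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='libvirt-pool/www-pa2-01'>
    <host name='1.1.1.1' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='...'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- disk 3, the OCFS2 volume -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='libvirt-pool/www-pa2-webmutu'>
    <host name='1.1.1.1' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='...'/>
  </auth>
  <target dev='vdc' bus='virtio'/>
</disk>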

That is very strange and contradicts how it should behave. Could you
post the resulting qemu argument string, by any chance? Also, please share
the method you are using to determine whether a disk uses the emulator
cache or not.
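
For example (a sketch, not a recipe: it assumes you enable a librbd admin socket for the client in ceph.conf, and the socket path below is illustrative), you could query librbd directly while the VM is running:

# in ceph.conf on the hypervisor
[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

# then, with the VM running, check whether the cache is actually enabled
ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.<pid>.<cctid>.asok config show | grep rbd_cache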


Here is my qemu argument string for the VM in question:

/usr/bin/qemu-system-x86_64 -name www-pa2-01 -S -machine pc-i440fx-1.6,accel=kvm,usb=off -m 2048 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 3542c57c-dd47-44cd-933f-7dae0b949012 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/www-pa2-01.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot order=c,menu=on,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=rbd:libvirt-pool/www-pa2-01:id=libvirt:key=XXXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=rbd:libvirt-pool/www-pa2-01-data:id=libvirt:key=XXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk1,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=rbd:libvirt-pool/www-pa2-webmutu:id=libvirt:key=XXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk2,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk2,id=virtio-disk2 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:90:53:6b,bus=pci.0,addr=0x3 -netdev tap,fd=35,id=hostnet1,vhost=on,vhostfd=36 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:7b:b9:85,bus=pci.0,addr=0x8 -netdev tap,fd=37,id=hostnet2,vhost=on,vhostfd=38 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:2e:ce:f6,bus=pci.0,addr=0xa -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:5 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

For the disk without cache (disk 3, drive-virtio-disk2), the relevant excerpt is:

-drive file=rbd:libvirt-pool/www-pa2-webmutu:id=libvirt:key=XXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk2,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk2,id=virtio-disk2



I don't really have a method to determine whether a disk uses the emulator's cache or not. But while testing whether my OCFS2 cluster was working correctly, I noticed the following: with "rbd cache = true" in ceph.conf and cache=none in the XML file, the OCFS2 cluster does not work; members do not see other members join or leave the cluster. With "rbd cache = false" in ceph.conf and cache=none in the XML, the cluster works, and members do see the others join or leave.
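
So in practice, the only setup that behaved correctly for OCFS2 was disabling the cache on the Ceph side as well. A sketch of the ceph.conf change on the hypervisor (note this applies to every librbd client reading this conf, so it may affect the writeback disks too):

[client]
rbd cache = false

If I read the QEMU rbd driver correctly, it should also be possible to override this for a single drive by appending the option to the rbd spec itself, something like the following (an untested sketch):

-drive file=rbd:libvirt-pool/www-pa2-webmutu:rbd_cache=false:id=libvirt:key=XXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789,if=none,id=drive-virtio-disk2,format=raw,cache=none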

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



