Re: rbd cache + libvirt

Hi,
>> looking at the latest version of QEMU,

It seems this has already been the behaviour since rbd_cache parsing was added to rbd.c by Josh in 2012:

http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/rbd.c;h=eebc3344620058322bb53ba8376af4a82388d277;hp=1280d66d3ca73e552642d7a60743a0e2ce05f664;hb=b11f38fcdf837c6ba1d4287b1c685eb3ae5351a8;hpb=166acf546f476d3594a1c1746dc265f1984c5c85
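
For illustration only, the ordering problem boils down to something like the following librados sketch (not the actual QEMU code; the conf path is just the default location): an option set with rados_conf_set() before rados_conf_read_file() ends up overridden by whatever the conf file says.

#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    char buf[16];

    /* create a handle; NULL means no specific client id */
    if (rados_create(&cluster, NULL) < 0)
        return 1;

    /* what QEMU does for cache=none: force the option off first ... */
    rados_conf_set(cluster, "rbd_cache", "false");

    /* ... then read ceph.conf; if the file sets "rbd cache = true",
       that value ends up winning, matching the behaviour reported here */
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");

    if (rados_conf_get(cluster, "rbd_cache", buf, sizeof(buf)) == 0)
        printf("effective rbd_cache = %s\n", buf);

    rados_shutdown(cluster);
    return 0;
}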


I'll do tests on my side tomorrow to be sure.
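
(One way to verify whether librbd is actually running with the cache enabled for a given guest, assuming an admin socket is configured for the qemu client in ceph.conf, would be to query it while the VM is running; the socket path below is only the usual template, not a real path.)

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

ceph --admin-daemon /var/run/ceph/<guest client .asok> config show | grep rbd_cache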



----- Original Message -----
From: "Jason Dillaman" <dillaman@xxxxxxxxxx>
To: "Arnaud Virlet" <avirlet@xxxxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, June 8, 2015 17:50:53
Subject: Re: rbd cache + libvirt

Hmm ... looking at the latest version of QEMU, it appears that the RBD cache settings are changed prior to reading the configuration file, instead of overriding the value after the configuration file has been read [1]. Try specifying the path to a new configuration file, in which the RBD cache is explicitly disabled, via the "conf=/path/to/my/new/ceph.conf" QEMU parameter [2]. 
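
For example, something along these lines (the override conf path is only an illustration; the image string is taken from your command line below, keeping the other rbd: options as before):

/etc/ceph/ceph-nocache.conf:

[client]
rbd cache = false

-drive file=rbd:libvirt-pool/www-pa2-webmutu:id=libvirt:conf=/etc/ceph/ceph-nocache.conf,if=none,id=drive-virtio-disk2,format=raw,cache=none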


[1] http://git.qemu.org/?p=qemu.git;a=blob;f=block/rbd.c;h=fbe87e035b12aab2e96093922a83a3545738b68f;hb=HEAD#l478 
[2] http://ceph.com/docs/master/rbd/qemu-rbd/#usage 

-- 

Jason Dillaman 
Red Hat 
dillaman@xxxxxxxxxx 
http://www.redhat.com 


----- Original Message ----- 
> From: "Arnaud Virlet" <avirlet@xxxxxxxxxxxxxxx> 
> To: "Andrey Korolyov" <andrey@xxxxxxx> 
> Cc: ceph-users@xxxxxxxxxxxxxx 
> Sent: Monday, June 8, 2015 11:36:46 AM 
> Subject: Re:  rbd cache + libvirt 
> 
> 
> 
> On 06/08/2015 03:17 PM, Andrey Korolyov wrote: 
> > On Mon, Jun 8, 2015 at 3:44 PM, Arnaud Virlet <avirlet@xxxxxxxxxxxxxxx> 
> > wrote: 
> >> 
> >> 
> >> On 06/08/2015 01:59 PM, Andrey Korolyov wrote: 
> >>> 
> >>> 
> >>> Am I understanding you right that you are using a certain template engine 
> >>> for both OCFS- and RBD-backed volumes within a single VM's config, and 
> >>> that it does not allow per-disk cache mode separation in the suggested way? 
> >>> 
> >> My VM has 3 disks on the RBD backend. Disks 1 and 2 have cache=writeback, 
> >> disk 3 (for ocfs2) has cache=none in my VM XML file. When I start the VM, 
> >> libvirt produces a launch string with cache=writeback for disks 1/2 and 
> >> with cache=none for disk 3. 
> >> Even with cache=none for disk 3, it doesn't seem to take effect without 
> >> setting rbd cache = false in ceph.conf. 
> > 
> > It is very strange and contradictory to how it should behave. Could you 
> > post the resulting qemu argument string, by any chance? Also, please share 
> > the method you are using to determine whether a disk uses the emulator 
> > cache or not. 
> > 
> 
> Here is the qemu argument string for the related VM: 
> 
> /usr/bin/qemu-system-x86_64 -name www-pa2-01 -S -machine 
> pc-i440fx-1.6,accel=kvm,usb=off -m 2048 -realtime mlock=off -smp 
> 2,sockets=2,cores=1,threads=1 -uuid 3542c57c-dd47-44cd-933f-7dae0b949012 
> -no-user-config -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/www-pa2-01.monitor,server,nowait 
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc 
> -no-shutdown -boot order=c,menu=on,strict=on -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive 
> file=rbd:libvirt-pool/www-pa2-01:id=libvirt:key=XXXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback 
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 
> -drive 
> file=rbd:libvirt-pool/www-pa2-01-data:id=libvirt:key=XXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk1,format=raw,cache=writeback 
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 
> -drive 
> file=rbd:libvirt-pool/www-pa2-webmutu:id=libvirt:key=XXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk2,format=raw,cache=none 
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk2,id=virtio-disk2 
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device 
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev 
> tap,fd=29,id=hostnet0,vhost=on,vhostfd=34 -device 
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:90:53:6b,bus=pci.0,addr=0x3 
> -netdev tap,fd=35,id=hostnet1,vhost=on,vhostfd=36 -device 
> virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:7b:b9:85,bus=pci.0,addr=0x8 
> -netdev tap,fd=37,id=hostnet2,vhost=on,vhostfd=38 -device 
> virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:2e:ce:f6,bus=pci.0,addr=0xa 
> -chardev pty,id=charserial0 -device 
> isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:5 -device 
> cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on 
> 
> For the disk without cache: 
> 
> -drive 
> file=rbd:libvirt-pool/www-pa2-webmutu:id=libvirt:key=XXX:auth_supported=cephx\;none:mon_host=1.1.1.1\:6789\;1.1.1.2\:6789\;1.1.1.3\:6789,if=none,id=drive-virtio-disk2,format=raw,cache=none 
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk2,id=virtio-disk2 
> 
> 
> 
> I don't really have a method to determine whether a disk uses the emulator's 
> cache or not. 
> When I was testing whether my OCFS2 cluster works correctly, I realized that 
> with "rbd cache = true" in ceph.conf and cache=none in the XML file, my 
> OCFS2 cluster doesn't work: cluster members don't see when other members 
> join or leave the cluster. 
> But with "rbd cache = false" in ceph.conf and cache=none in the XML, the 
> OCFS2 cluster works, and cluster members see the other members when they 
> join or leave. 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




