Re: QEMU -drive setting (if=none) for rbd


 



This is where it actually confuses me. According to the ceph document (http://ceph.com/docs/master/rbd/qemu-rbd/),

"QEMU’s cache settings override Ceph’s default settings (i.e., settings that are not explicitly set in the Ceph configuration file). If you explicitly set RBD Cache settings in your Ceph configuration file, your Ceph settings override the QEMU cache settings. If you set cache settings on the QEMU command line, the QEMU command line settings override the Ceph configuration file settings."

If I set the qemu caching parameter in nova.conf by,

      disk_cachemodes="network=writeback"

This would give me "cache=writeback" in the QEMU command-line arguments for the rbd device when the VM is created. According to the ceph doc above, it would be equivalent to setting "rbd_cache = true". Since I am not specifying any other rbd parameters (e.g., rbd_cache_size, etc.) on the QEMU command line (and it can't be done anyway, according to the blueprint), those should default to what I have set in ceph.conf?
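If that reading is right, the ceph.conf side would look something like the sketch below. The values are illustrative only (they mirror RBD's documented defaults), not recommendations from this thread:

```ini
[client]
# With cache=writeback on the QEMU command line, rbd_cache is enabled;
# any values explicitly set here override RBD's built-in defaults.
rbd cache = true
rbd cache size = 33554432          ; 32 MiB of cache
rbd cache max dirty = 16777216     ; flush when dirty data exceeds 16 MiB
rbd cache target dirty = 8388608   ; start flushing at 8 MiB dirty
```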

Or is my understanding of the ceph document completely off-base?

--weiguo

P.S.,   

My original question is actually regarding how the "if=" parameter impacts the rbd performance, which is not directly related to rbd caching configuration.





From: sebastien.han@xxxxxxxxxxxx
Date: Thu, 13 Jun 2013 15:59:06 +0200
To: Oliver.Francke@xxxxxxxx
CC: ceph-users@xxxxxxxxxxxxxx; wsun2@xxxxxxxxxxx
Subject: Re: QEMU -drive setting (if=none) for rbd

OpenStack doesn't know how to set different caching options for attached block devices.

This might be implemented for Havana.

Cheers.

––––
Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."

Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
Email : sebastien.han@xxxxxxxxxxxx – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance

On Jun 11, 2013, at 7:43 PM, Oliver Francke <Oliver.Francke@xxxxxxxx> wrote:

Hi,

Am 11.06.2013 um 19:14 schrieb w sun <wsun2@xxxxxxxxxxx>:

Hi,

We are currently testing the performance with rbd caching enabled in write-back mode on our OpenStack (Grizzly) nova nodes. By default, nova fires up the rbd volumes with "if=none", as evidenced by the following command line from "ps | grep":

-drive file=rbd:ceph-openstack-volumes/volume-949e2e32-20c7-45cf-b41b-46951c78708b:id=ceph-openstack-volumes:key=12347I9RsEoIDBAAi2t+M6+7zMMZoMM+aasiog==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,serial=949e2e32-20c7-45cf-b41b-46951c78708b,cache=writeback 

Does anyone know if this should be set to anything else (e.g., if=virtio, as suggested by some QEMU posts in general)? Given that the underlying network stack for RBD I/O is provided by the Linux kernel instead, does this option bear any relevance for rbd volume performance inside the guest VM?

there should be something like "-device virtio-blk-pci,drive=drive-virtio-disk0" referencing the id= of the drive specification.
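In other words, "if=none" only defines the backend; a separate -device line attaches it to the guest as a virtio disk. A minimal sketch of the pairing (pool and volume names here are placeholders, and the rbd spec is abbreviated compared to the nova-generated one above):

```shell
# "if=none" creates an unattached backend identified by id=;
# the -device line then exposes it to the guest as a virtio-blk disk.
qemu-system-x86_64 \
  -drive file=rbd:mypool/myvolume:auth_supported=none,if=none,id=drive-virtio-disk0,format=raw,cache=writeback \
  -device virtio-blk-pci,drive=drive-virtio-disk0
```

So the if= option matters mainly in that "if=none" defers the choice of guest interface to the -device line; the virtio-blk-pci device there is what the guest actually sees.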

Furthermore, to really exercise rbd_cache, something like

rbd_cache=true:rbd_cache_size=33554432:rbd_cache_max_dirty=16777216:rbd_cache_target_dirty=8388608

is missing from the ":"-separated list, perhaps appended after ":none".

cache=writeback is necessary, too.
No idea, though, how to teach OpenStack to use these parameters, sorry.
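Spelled out, a -drive value with those parameters appended to the ":"-separated rbd spec would look roughly like this (a sketch based on the nova-generated line above, with the key= portion omitted):

```shell
# Hypothetical: Oliver's rbd cache parameters appended to the rbd spec,
# after "auth_supported=cephx\;none" and before the comma-separated
# QEMU drive options (if=, id=, format=, cache=).
-drive file=rbd:ceph-openstack-volumes/volume-949e2e32-20c7-45cf-b41b-46951c78708b:id=ceph-openstack-volumes:auth_supported=cephx\;none:rbd_cache=true:rbd_cache_size=33554432:rbd_cache_max_dirty=16777216:rbd_cache_target_dirty=8388608,if=none,id=drive-virtio-disk0,format=raw,cache=writeback
```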


Regards,

Oliver.


Thanks. --weiguo




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

