OpenStack doesn't know how to set different caching options per attached block device.
This might be implemented in Havana.
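For reference, a minimal sketch of what such per-driver cache control could look like in nova.conf; the `disk_cachemodes` option for the libvirt driver appeared in later nova releases, so the section and value here are assumptions about a post-Grizzly setup, not something available at the time of this thread:

```ini
# nova.conf (hypothetical post-Grizzly setup): ask the libvirt driver
# to use writeback caching for network-backed disks such as RBD.
[libvirt]
disk_cachemodes = "network=writeback"
```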
Cheers. –––– Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood."
Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70 Address : 10, rue de la Victoire – 75009 Paris
On Jun 11, 2013, at 7:43 PM, Oliver Francke <Oliver.Francke@xxxxxxxx> wrote:

Hi,
On 11.06.2013 at 19:14, w sun <wsun2@xxxxxxxxxxx> wrote:
Hi,
We are currently testing performance with RBD caching enabled in write-back mode on our OpenStack (Grizzly) nova nodes. By default, nova fires up the rbd volumes with "if=none", as evidenced by the following command line from "ps | grep":
-drive file=rbd:ceph-openstack-volumes/volume-949e2e32-20c7-45cf-b41b-46951c78708b:id=ceph-openstack-volumes:key=12347I9RsEoIDBAAi2t+M6+7zMMZoMM+aasiog==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,serial=949e2e32-20c7-45cf-b41b-46951c78708b,cache=writeback
Does anyone know if this should be set to anything else (e.g., if=virtio, suggested by some qemu posts in general)? Given that the underlying network stack for RBD I/O is provided by the Linux kernel instead, does this option bear any relevance for rbd volume performance inside the guest VM?
there should be something like "-device virtio-blk-pci,drive=drive-virtio-disk0" in reference to the id= of the drive specification.
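A hedged sketch of how the two halves pair up: with if=none, the -drive line only defines the backend, and a matching -device line attaches it to the guest bus via its id=. The volume UUID and key below are placeholders, not values from a real deployment:

```shell
# Hypothetical qemu invocation (sketch): backend defined by -drive,
# frontend attached by -device referencing the same id=.
qemu-system-x86_64 \
  -drive "file=rbd:ceph-openstack-volumes/volume-XXXX:id=ceph-openstack-volumes:key=PLACEHOLDER:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,cache=writeback" \
  -device virtio-blk-pci,drive=drive-virtio-disk0
```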
Furthermore, to really enable rbd_cache, something like:

rbd_cache=true:rbd_cache_size=33554432:rbd_cache_max_dirty=16777216:rbd_cache_target_dirty=8388608

is missing from the ":"-separated option list, perhaps appended right after ":none".
cache=writeback is necessary, too. No idea, though, how to teach OpenStack to use these parameters, sorry.
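To make the structure of that ":"-separated drive string concrete, here is a small illustrative Python helper that assembles one with the rbd_cache options appended. The function name and the placeholder key are hypothetical; the option names and sizes are the ones quoted above:

```python
# Hypothetical helper: build the ':'-separated rbd drive string that
# QEMU expects, appending the rbd_cache_* options discussed above.
def build_rbd_drive(pool, volume, client_id, key,
                    cache_size=33554432,
                    cache_max_dirty=16777216,
                    cache_target_dirty=8388608):
    opts = [
        f"rbd:{pool}/{volume}",          # image location
        f"id={client_id}",               # cephx client id
        f"key={key}",                    # cephx secret (placeholder here)
        r"auth_supported=cephx\;none",   # escaped ';' inside the option list
        "rbd_cache=true",
        f"rbd_cache_size={cache_size}",
        f"rbd_cache_max_dirty={cache_max_dirty}",
        f"rbd_cache_target_dirty={cache_target_dirty}",
    ]
    return ":".join(opts)

spec = build_rbd_drive("ceph-openstack-volumes",
                       "volume-949e2e32-20c7-45cf-b41b-46951c78708b",
                       "ceph-openstack-volumes",
                       "PLACEHOLDERKEY==")
print(spec)
```

The same cache settings can usually also be placed in the [client] section of ceph.conf on the nova host (rbd cache = true, etc.), which avoids patching the command line at all.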
Regards,
Oliver.
Thanks. --weiguo
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com