The easy solution to this is to create a really tiny image in glance (call it fake_image or something like that) and tell nova that it is the image you are using. Since you are booting from the RBD anyway, it doesn't actually use the image for anything, and should only put a single copy of it in the _base directory.
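A minimal sketch of that workaround, assuming the Grizzly-era glance CLI (the image name and size here are illustrative, not from the thread):

```shell
# Create a tiny placeholder file and register it in glance.
# "fake_image" is just an example name.
dd if=/dev/zero of=fake_image.img bs=1M count=1
glance image-create --name fake_image \
    --disk-format raw --container-format bare \
    --file fake_image.img
```

You would then pass that image to "nova boot" alongside the bootable Cinder volume; since the instance actually boots from the RBD-backed volume, only the tiny placeholder should ever land in _base.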
On Wed, Aug 21, 2013 at 8:06 AM, Sébastien Han <sebastien.han@xxxxxxxxxxxx> wrote:
Do you use Xen or KVM? It seems that Xen has a flag called cache_images=all. However, I haven't seen anything similar for KVM.

––––
Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Address: 10, rue de la Victoire - 75009 Paris
Web: www.enovance.com - Twitter: @enovance

On August 20, 2013 at 10:07:56 PM, w sun (wsun2@xxxxxxxxxxx) wrote:
This might be slightly off topic, though many ceph users may have run into similar issues.

In one of our Grizzly OpenStack environments, we use Ceph/RBD as the exclusive image and volume storage for VMs, which boot from RBD-backed Cinder volumes. As a result, the nova image cache is not used at all. For some reason, nova still creates cached images under /var/lib/nova/_base on the nova nodes. This fills up our shared /var/lib/nova/instances directory (on NFS), which has a limited size (50 GB) and is used to store config drives and to enable VM failover/restart during hardware failures.

Does anyone know how to disable nova's image caching completely? Or does anyone have a suggestion for the best way to deal with this issue? We know we can do aggressive clearing with some of the nova image cache management configuration options, but that doesn't help reduce the extra IO overhead of caching images on the nova nodes.

Thanks. --weiguo
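For reference, the aggressive-clearing knobs mentioned above look roughly like this in nova.conf (Grizzly-era option names; the interval and age values are illustrative):

```
[DEFAULT]
# Run the image cache manager periodic task every 2400 seconds
image_cache_manager_interval = 2400
# Remove base images that no instance references any more...
remove_unused_base_images = True
# ...once they have gone unused for an hour
remove_unused_original_minimum_age_seconds = 3600
```

As noted, this only bounds the cache size after the fact; it does not stop nova from writing to _base in the first place.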
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com