Hello,

I'm seeing a weird issue with OpenStack and Gluster. I have /var/lib/nova/instances mounted as a glusterfs volume, and the owner of /var/lib/nova/instances is nova:nova. When I launch a VM and watch the instance directory during the launch, I see the following:

    root@c01:/var/lib/nova/instances/instance-00000012# ls -l
    total 8
    -rw-rw---- 1 nova nova    0 Aug 24 14:22 console.log
    -rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml

This is correct. Then the ownership changes to libvirt-qemu:

    root@c01:/var/lib/nova/instances/instance-00000012# ls -l
    total 22556
    -rw-rw---- 1 libvirt-qemu kvm         0 Aug 24 14:22 console.log
    -rw-r--r-- 1 libvirt-qemu kvm  27262976 Aug 24 14:22 disk
    -rw-rw-r-- 1 nova         nova     1459 Aug 24 14:22 libvirt.xml

Again, this is correct. But then it changes to root:

    root@c01:/var/lib/nova/instances/instance-00000012# ls -l
    total 22556
    -rw-rw---- 1 root root        0 Aug 24 14:22 console.log
    -rw-r--r-- 1 root root 27262976 Aug 24 14:22 disk
    -rw-rw-r-- 1 nova nova     1459 Aug 24 14:22 libvirt.xml

OpenStack then errors out because it can no longer access the files correctly. If I remove the /var/lib/nova/instances mount and just use the local filesystem, the change to root ownership does not happen. I have had Gluster working with OpenStack in exactly this way on a different installation, so I'm not sure why I'm seeing this issue now.

Any ideas?

Thanks,
Joe
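
P.S. In case anyone wants to reproduce the observation above, a minimal way to watch the ownership flip while the instance boots is something along these lines (the instance directory name, instance-00000012 here, is just whatever nova assigned on that particular run; substitute the current one):

    # Poll the instance directory once a second while the VM is launching,
    # so the nova -> libvirt-qemu -> root ownership changes are visible.
    watch -n 1 'ls -l /var/lib/nova/instances/instance-00000012'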