Re: Gluster 3.5 problems with libgfapi/qemu


 



On 06/12/2014 07:47 PM, Ivano Talamo wrote:
On 6/12/14 3:44 PM, Vijay Bellur wrote:
On 06/11/2014 11:25 PM, Ivano Talamo wrote:
Hello,
I recently updated 2 servers (Scientific Linux 6) hosting a replicated volume
from Gluster 3.4 to 3.5.0-2.
The volume was previously used to host qemu/kvm VM images accessed via a
fuse-mounted mount-point.
Now I would like to use libgfapi, but I'm seeing this error:

[root@cmsrm-service02 ~]# qemu-img info
gluster://cmsrm-service02/vol1/vms/disks/cmsrm-ui01.raw2
[2014-06-11 17:47:22.084842] E [afr-common.c:3959:afr_notify]
0-vol1-replicate-0: All subvolumes are down. Going offline until atleast
one of them comes back up.
image: gluster://cmsrm-service03/vol1/vms/disks/cmsrm-ui01.raw2
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 4.7G
[2014-06-11 17:47:22.318034] E [afr-common.c:3959:afr_notify]
0-vol1-replicate-0: All subvolumes are down. Going offline until atleast
one of them comes back up.


This is a benign error message. qemu-img initializes a glusterfs graph
through libgfapi, performs the operation, and then tears the graph down.
The afr translator in glusterfs emits this log message as part of that
graph cleanup. IIRC, qemu sends all log messages to stderr by default,
and hence this message is seen.
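Since the message goes to stderr while qemu-img's actual result goes to stdout, the noise can be discarded or captured separately (a sketch using the hostname and image path from the example above):

```shell
# qemu-img prints the image info on stdout; the libgfapi/afr log lines
# arrive on stderr, so they can simply be dropped:
qemu-img info gluster://cmsrm-service02/vol1/vms/disks/cmsrm-ui01.raw2 2>/dev/null

# ...or redirected to a file for later inspection:
qemu-img info gluster://cmsrm-service02/vol1/vms/disks/cmsrm-ui01.raw2 2>gfapi.log
```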

I suspected as much, since the volume was fine afterwards and I could
still access it.

The error message does not appear if I access the file via the
mount-point.


There should be no functional problem even if this message is seen.


By the way, I have problems starting the VM: the "virsh start <vm-name>"
command blocks forever on a futex call.
The relevant XML section is:
     <disk type='network' device='disk'>
       <driver name='qemu' type='qcow2' cache='none'/>
       <source protocol='gluster' name='vol1/vms/disks/cmsrm-ui01.qcow2'>
         <host name='141.108.36.19' port='24007'/>
       </source>
       <target dev='vda' bus='virtio'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
     </disk>
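For reference (my own hypothetical sketch of libvirt's usual translation, not taken from this thread), a &lt;disk&gt; element like the one above corresponds roughly to a qemu drive specification of this shape:

```shell
# Rough sketch of the -drive argument libvirt would generate for the
# gluster-backed disk above; qemu's gluster URI syntax is
# gluster://server[:port]/volname/path/to/image
qemu-system-x86_64 \
  -drive file=gluster://141.108.36.19:24007/vol1/vms/disks/cmsrm-ui01.qcow2,format=qcow2,cache=none,if=virtio
```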


Do you happen to notice any errors in the glusterd and glusterfsd logs?
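Assuming the default log directory on the servers (exact file names can vary slightly by version and packaging), the logs in question would typically be checked with something like:

```shell
# glusterd (management daemon) log:
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# per-brick (glusterfsd) logs live under the bricks subdirectory:
ls /var/log/glusterfs/bricks/

# follow the brick logs while reproducing the hang in another terminal:
tail -f /var/log/glusterfs/bricks/*.log
```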

-Vijay

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users



