Hi,
a fix patch is available for Proxmox pve-qemu-kvm.
If anyone is in contact with the QEMU developers, please show them this one.
2014-08-26 13:23 GMT+03:00 Roman <romeo.r@xxxxxxxxx>:
Never mind, it's a Proxmox and QEMU mounting issue. If someone can help, please write here. What needs to be implemented can be seen just by reading /usr/include/glusterfs/api/glfs.h, function glfs_set_volfile_server():
NOTE: This API is special, multiple calls to this function with different
volfile servers, port or transport-type would create a list of volfile
servers which would be polled during `volfile_fetch_attempts()`

So it should be possible to pass two servers; someone just needs to implement that in QEMU. See the sketch below.
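To illustrate, here is a minimal, untested sketch of what that could look like on the libgfapi side. The volume name HA-MED-PVE1-1T and the hosts stor1/stor2 are taken from the thread below; 24007 is assumed as the default Gluster management port:

    /* Build (assuming the glusterfs-api dev package is installed):
       gcc two_servers.c $(pkg-config --cflags --libs glusterfs-api) */
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("HA-MED-PVE1-1T");
        if (!fs)
            return 1;

        /* Per the glfs.h note above, each call appends another volfile
           server to the list that is polled on fetch attempts. */
        glfs_set_volfile_server(fs, "tcp", "stor1", 24007);
        glfs_set_volfile_server(fs, "tcp", "stor2", 24007);

        if (glfs_init(fs) < 0) {
            fprintf(stderr, "glfs_init failed\n");
            glfs_fini(fs);
            return 1;
        }
        /* ... open images and do I/O here ... */
        glfs_fini(fs);
        return 0;
    }

If the first server is unreachable, the volfile fetch should then fall back to the second one, which is the failover behaviour being asked for here.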
2014-08-26 9:49 GMT+03:00 Roman <romeo.r@xxxxxxxxx>:

Hi,

I'm using Proxmox for QEMU hosts, and I'm in a situation where I don't understand something :)

When I add Gluster storage from the Proxmox web GUI, I can install QEMU guests without any caching. mount -l shows me this output:

stor1:HA-MED-PVE1-1T on /mnt/pve/HA-MED-PVE1-1T type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
But in this case, if one server is down (in the case of an HA brick), Proxmox won't be able to read the configuration from either the first or the second server (Proxmox only knows where it should fail over for currently running guests) and is not able to create new machines using this kind of mount.

If I add a mount line like this in fstab:

stor1:HA-MED-PVE1-1T /mnt/pve/HA-MED-PVE1-1T glusterfs defaults,default_permissions,backupvolfile-server=stor2,direct-io-mode=enable,allow_other,max_read=131072 0 0

with mount -l now:

stor1:HA-MED-PVE1-1T on /mnt/pve/HA-MED-PVE1-1T type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

I'm getting this error when trying to start guests:

kvm: -drive file=/mnt/pve/HA-MED-PVE1-1T/images/125/vm-125-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none: file system may not support O_DIRECT
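As an aside, a quick way to check whether a given mount accepts O_DIRECT (which cache=none requires) is to try opening a file on it with that flag. A minimal sketch, with a hypothetical test path on the mount from above:

    /* Build: gcc o_direct_test.c -o o_direct_test */
    #define _GNU_SOURCE              /* needed for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* hypothetical test file; pass any path on the mount under test */
        const char *path = argc > 1 ? argv[1]
                                    : "/mnt/pve/HA-MED-PVE1-1T/o_direct_probe";
        int fd = open(path, O_CREAT | O_WRONLY | O_DIRECT, 0600);
        if (fd < 0) {
            perror("open with O_DIRECT"); /* EINVAL typically means unsupported */
            return 1;
        }
        close(fd);
        unlink(path);
        printf("O_DIRECT open succeeded on %s\n", path);
        return 0;
    }

QEMU does a similar open internally when cache=none is set, which is presumably where the "file system may not support O_DIRECT" message above comes from.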
What could be the difference?
--
Best regards,
Roman.