Oops. I missed the note when I went through the core. Sorry everyone for the incorrect information I provided earlier about libgfapi.
And thank you Niels for bringing up the correct information.
Regarding libvirt parsing the provided XML: according to the table under [1], a source of type `gluster` can have only one host.
So it seems libvirt doesn't support backup volfile servers. I haven't been able to find any information on qemu's support, so I'm assuming it doesn't support backup volfile servers either.
~kaushal
On Wed, Jan 28, 2015 at 3:11 PM, Niels de Vos <ndevos@xxxxxxxxxx> wrote:
On Wed, Jan 28, 2015 at 02:06:32PM +0530, Pranith Kumar Karampuri wrote:
> Added Niels and Shyam who may know about this.
>
> Pranith
> On 01/27/2015 12:25 PM, Arash Shams wrote:
> >
> >Is anyone paying attention to my question?
> >------------------------------------------------------------------------
> >From: ara4sh@xxxxxxxxxxx
> >To: gluster-users@xxxxxxxxxxx
> >Date: Sun, 25 Jan 2015 08:16:49 +0000
> >Subject: how to Set BackupVol for Libgfapi ??
> >
> >Hello
> >is this possible ??
> > <disk type='network' device='disk'>
> > <driver name='qemu' type='qcow2' cache='none'/>
> > <source protocol='gluster' name='vol1/vms/disks/cmsrm-ui01.qcow2'>
> > <host name='141.108.36.19' port='24007'/>
> > <host name='141.108.36.20' port='24007'/>
> > <host name='141.108.36.21' port='24007'/>
> > <host name='141.108.36.22' port='24007'/>
> > </source>
> > <target dev='vda' bus='virtio'/>
> > <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
> >function='0x0'/>
> > </disk>
> >so when one server goes down my VM doesn't go down!
This is interpreted by libvirt, and I do not know if the syntax is valid
or how libvirt handles it.
libgfapi offers this functionality by calling glfs_set_volfile_server()
a number of times with different servernames/addresses:
https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h#L134
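As a rough sketch, calling glfs_set_volfile_server() repeatedly looks like the following. This is untested illustration code: the volume name "vol1" and the server addresses are just the placeholders from the XML example in this thread, and it assumes the glusterfs-api development headers are installed.

```c
/* Sketch (untested): open a Gluster volume via libgfapi with several
 * candidate volfile servers. Requires glusterfs-api; volume name and
 * addresses are placeholders from the example in this thread. */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("vol1");
    if (!fs)
        return 1;

    /* Each call adds another server to try when fetching the volfile
     * during glfs_init(); later calls do not replace earlier ones. */
    glfs_set_volfile_server(fs, "tcp", "141.108.36.19", 24007);
    glfs_set_volfile_server(fs, "tcp", "141.108.36.20", 24007);
    glfs_set_volfile_server(fs, "tcp", "141.108.36.21", 24007);
    glfs_set_volfile_server(fs, "tcp", "141.108.36.22", 24007);

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* ... use the volume ... */

    glfs_fini(fs);
    return 0;
}
```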
Note that the specified server is only a single-point-of-failure when
the initial connection to the volume is made. Most users seem to use
round-robin-DNS or a virtual IP-address (with some HA/failover solution)
to prevent this SPOF.
After connecting, the client knows about all the servers that are part
of the volume. If one server goes down, others will be used
automatically.
If one brick of a replica goes down, there will be a timeout (by default
42 seconds) until another brick is tried. Depending on the
configuration of the storage inside the VM, you could run into read-only
filesystems when such a failover happens.
From the Gluster side, you could try tuning the network.ping-timeout
option. Or, inside the VM, you can set a SCSI timeout per disk (in a sysfs
file like /sys/block/sda/device/timeout); you'll need a udev rule or
something similar to make the change permanent.
HTH,
Niels
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users