Re: [Gluster-devel] VM fs becomes read only when one gluster node goes down

I am running oVirt with a self-hosted engine and additional VMs on a
replica-two Gluster volume. I have an "arbiter" node and set the quorum
ratio to 51%. The arbiter node is just another machine with the
glusterfs bits installed that is part of the gluster peer group but has
no bricks on it. With three peers and a 51% ratio, losing any single
machine still leaves 2 of 3 peers (67%) up, so server quorum holds and
the volume stays writable.
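
Roughly, the relevant commands look like this (the hostname and the
volume name "vmstore" are placeholders, not my real ones):

  # from one of the two data nodes, add the brick-less arbiter peer
  gluster peer probe arbiter.example.edu

  # enforce server-side quorum, with a cluster-wide 51% ratio
  gluster volume set all cluster.server-quorum-ratio 51%
  gluster volume set vmstore cluster.server-quorum-type server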

You will have to be careful about where you put these three machines if
they are going to sit in separate server rooms or buildings. There are
pros and cons to how you distribute the nodes, and network topology may
also influence the choice.

In my case this is on a campus: I have machines in 3 separate
buildings, and all of them hang off the same main campus router (we
have more than one main router). All machines are connected via
10 Gbps. If I had one node with bricks and the arbiter in the same
building, and that building went down (power/AC/chilled water/network),
then the other node with bricks would be useless because quorum would
be lost. This is why I have machines in 3 different buildings. It also
matters because most of the client systems are not even in the same
building as the servers; if my client machines and servers were in the
same building, then putting one node with bricks and the arbiter
together in that building could make sense.

HTH,

Diego




On Wed, Oct 28, 2015 at 5:25 AM, Niels de Vos <ndevos@xxxxxxxxxx> wrote:
> On Tue, Oct 27, 2015 at 07:21:35PM +0100, André Bauer wrote:
>>
>> Hi Niels,
>>
>> my network.ping-timeout was already set to 5 seconds.
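>>
>> For reference, that was set with (the volume name here is a stand-in
>> for my real one):
>>
>>   gluster volume set myvol network.ping-timeout 5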
>>
>> Unfortunately it seems I don't have the timeout setting in Ubuntu
>> 14.04 for my vda disk.
>>
>> ls -al /sys/block/vda/device/ gives me only:
>>
>> drwxr-xr-x 4 root root    0 Oct 26 20:21 ./
>> drwxr-xr-x 5 root root    0 Oct 26 20:21 ../
>> drwxr-xr-x 3 root root    0 Oct 26 20:21 block/
>> -r--r--r-- 1 root root 4096 Oct 27 18:13 device
>> lrwxrwxrwx 1 root root    0 Oct 27 18:13 driver ->
>> ../../../../bus/virtio/drivers/virtio_blk/
>> -r--r--r-- 1 root root 4096 Oct 27 18:13 features
>> -r--r--r-- 1 root root 4096 Oct 27 18:13 modalias
>> drwxr-xr-x 2 root root    0 Oct 27 18:13 power/
>> -r--r--r-- 1 root root 4096 Oct 27 18:13 status
>> lrwxrwxrwx 1 root root    0 Oct 26 20:21 subsystem ->
>> ../../../../bus/virtio/
>> -rw-r--r-- 1 root root 4096 Oct 26 20:21 uevent
>> -r--r--r-- 1 root root 4096 Oct 26 20:21 vendor
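>>
>> As far as I can tell, virtio_blk simply does not expose a timeout
>> attribute at all. A SCSI-backed disk (e.g. virtio-scsi, showing up
>> as sda) would have one:
>>
>>   cat /sys/block/sda/device/timeout
>>   echo 300 > /sys/block/sda/device/timeout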
>>
>>
>> Is the quorum setting a problem if you only have 2 replicas?
>>
>> My volume has these quorum options set:
>>
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
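>>
>> (taken from the output of the following, where "myvol" stands in for
>> the real volume name:
>>
>>   gluster volume info myvol | grep quorum
>> )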
>>
>> As I understand the documentation (
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html
>> ), cluster.server-quorum-ratio is "> 50%" by default, which can never
>> be met if you only have 2 replicas and one node goes down, right?
>>
>> Do I need cluster.server-quorum-ratio = 50% in this case?
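>>
>> That is, something like (assuming the ratio has to be set
>> cluster-wide):
>>
>>   gluster volume set all cluster.server-quorum-ratio 50%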
>
> Replica 2 for VM storage is troublesome. Sahina just responded very
> nicely to a similar email:
>
>   http://thread.gmane.org/gmane.comp.file-systems.gluster.user/22818/focus=22823
>
> HTH,
> Niels
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



