Re: [Gluster-users] VM fs becomes read only when one gluster node goes down

Thanks for the hints, guys :-)

I think I will try to use an arbiter. As I use distributed/replicated
volumes, I think I have to add two arbiters, right?

My nodes have 10 GBit interfaces. Would 1 GBit be enough for the arbiter(s)?
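For reference, a 2x2 distributed-replicated volume has two replica sets, so it needs one arbiter brick per set, i.e. two bricks in total, though both can live on a single arbiter node. A rough sketch of an in-place conversion (supported in newer Gluster releases); the hostname "arb1", the volume name "myvol" and the brick paths are placeholders:

```shell
# Sketch only: "arb1", "myvol" and the brick paths are hypothetical.
# One arbiter brick per replica set; both bricks may sit on one node.
gluster volume add-brick myvol replica 3 arbiter 1 \
    arb1:/bricks/myvol-arb-0 arb1:/bricks/myvol-arb-1
```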

Regards
André


On 28.10.2015 at 14:38, Diego Remolina wrote:
> I am running oVirt and a self-hosted engine with additional VMs on a
> replica-two Gluster volume. I have an "arbiter" node and set the quorum
> ratio to 51%. The arbiter node is just another machine with the
> glusterfs bits installed that is part of the gluster peers but has no
> bricks on it.
> 
> You will have to be very careful where you put these three machines if
> they are going to go in separate server rooms or buildings. There are
> pros and cons to distributing the nodes, and network topology may
> also influence the choice.
> 
> In my case, this is on a campus: I have machines in 3 separate
> buildings, and all machines are on the same main campus router (we have
> more than one main router). All machines are connected via 10 Gbps. If I
> had one node with bricks and the arbiter in the same building, and that
> building went down (power/AC/chilled water/network), then the other node
> with bricks would be useless. This is why I have machines in 3
> different buildings. Also, most of the client
> systems are not even in the same building as the servers. If my client
> machines and servers were in the same building, then putting one node
> with bricks and the arbiter in that same building could make sense.
> 
> HTH,
> 
> Diego
> 
> 
> 
> 
> On Wed, Oct 28, 2015 at 5:25 AM, Niels de Vos <ndevos@xxxxxxxxxx> wrote:
>> On Tue, Oct 27, 2015 at 07:21:35PM +0100, André Bauer wrote:
>>>
>>> Hi Niels,
>>>
>>> my network.ping-timeout was already set to 5 seconds.
>>>
>>> Unfortunately, it seems I don't have the timeout setting in Ubuntu
>>> 14.04 for my vda disk.
>>>
>>> ls -al /sys/block/vda/device/ gives me only:
>>>
>>> drwxr-xr-x 4 root root    0 Oct 26 20:21 ./
>>> drwxr-xr-x 5 root root    0 Oct 26 20:21 ../
>>> drwxr-xr-x 3 root root    0 Oct 26 20:21 block/
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 device
>>> lrwxrwxrwx 1 root root    0 Oct 27 18:13 driver ->
>>> ../../../../bus/virtio/drivers/virtio_blk/
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 features
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 modalias
>>> drwxr-xr-x 2 root root    0 Oct 27 18:13 power/
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 status
>>> lrwxrwxrwx 1 root root    0 Oct 26 20:21 subsystem ->
>>> ../../../../bus/virtio/
>>> -rw-r--r-- 1 root root 4096 Oct 26 20:21 uevent
>>> -r--r--r-- 1 root root 4096 Oct 26 20:21 vendor
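As a side note (a sketch, not verified on this setup): the missing attribute is expected for virtio-blk, which does not expose a per-device I/O timeout in sysfs. If the disk were attached through a virtio-scsi controller instead, it would appear as sdX with a tunable SCSI command timeout:

```shell
# Sketch: "sda" is a hypothetical virtio-scsi disk; virtio-blk (vda)
# has no such attribute, as the listing above shows.
cat /sys/block/sda/device/timeout          # default is usually 30 (seconds)
echo 80 | sudo tee /sys/block/sda/device/timeout
```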
>>>
>>>
>>> Is the quorum setting a problem if you only have 2 replicas?
>>>
>>> My volume has this quorum options set:
>>>
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>>
>>> As I understand the documentation (
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html
>>> ), cluster.server-quorum-ratio is set to "< 50%" by default, which can
>>> never happen if you only have 2 replicas and one node goes down, right?
>>>
>>> Do I need cluster.server-quorum-ratio = 50% in this case?
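A sketch of setting an explicit ratio, assuming a third trusted peer (it can be brickless) has been added so that losing one of three servers still leaves a majority; the volume name "myvol" is a placeholder:

```shell
# Sketch only: needs a third trusted peer, even one without bricks.
# With 51% of 3 servers required, 2 must be up for bricks to stay writable.
gluster volume set all cluster.server-quorum-ratio 51%
gluster volume set myvol cluster.server-quorum-type server
```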
>>
>> Replica 2 for VM storage is troublesome. Sahina just responded very
>> nicely to a very similar email:
>>
>>   http://thread.gmane.org/gmane.comp.file-systems.gluster.user/22818/focus=22823
>>
>> HTH,
>> Niels
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Kind regards
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: abauer@xxxxxxxxx
www.magix.com

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

----------------------------------------------------------------------
The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful. MAGIX does not warrant that any attachments are free
from viruses or other defects and accepts no liability for any losses
resulting from infected email transmissions. Please note that any
views expressed in this email may be those of the originator and do
not necessarily represent the agenda of the company.
----------------------------------------------------------------------
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



