On 09/01/2015 09:21 AM, Atin Mukherjee wrote:
-Atin
Sent from one plus one
On Sep 1, 2015 9:39 PM, "Joe Julian" <joe@xxxxxxxxxxxxxxxx> wrote:
>
>
>
> On 09/01/2015 02:59 AM, Atin Mukherjee wrote:
>>
>>
>> On 09/01/2015 02:34 PM, Joe Julian wrote:
>>>
>>>
>>> On 08/31/2015 09:03 PM, Atin Mukherjee wrote:
>>>>
>>>> On 09/01/2015 01:00 AM, Merlin Morgenstern wrote:
>>>>>
>>>>> This all makes sense and sounds a bit like a Solr setup :-)
>>>>>
>>>>> I have now added the third node as a peer:
>>>>> sudo gluster peer probe gs3
>>>>>
>>>>> That indeed allows me to mount the share manually on node2 even if
>>>>> node1 is down.
>>>>>
>>>>> BUT: It does not mount on reboot! It only mounts successfully if
>>>>> node1 is up. I need to do a manual: sudo mount -a
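>>>>>
>>>>> For context, the mount is a plain native-client fstab entry along
>>>>> these lines (the volume name "myvol" and the mount point here are
>>>>> placeholders, not my exact config):
>>>>>
>>>>>   node1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0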
>>>>
>>>> You would need to ensure that at least two of the nodes in the
>>>> cluster are up in this case.
>>>
>>> Atin, why? I've never had that restriction.
>>>
>>> It sounds to me like the mount's trying to happen before any bricks
>>> are available and/or glusterd is listening.
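>>>
>>> One thing worth trying is letting the client fall back to another
>>> server for the volfile. A sketch, assuming the volume is named myvol
>>> and gs2/gs3 are the other peers (backup-volfile-servers is the
>>> mount.glusterfs option for naming fallback volfile servers):
>>>
>>>   node1:/myvol /mnt/myvol glusterfs defaults,_netdev,backup-volfile-servers=gs2:gs3 0 0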
>>
>> In a 3-node cluster, if two of the nodes are already down and the
>> other is rebooted, then the GlusterD instance won't start the brick
>> process until it receives the first handshake from one of its peers,
>> and for that you would need your 2nd node to be up as well. This is
>> why it's recommended to add a 3rd dummy node (without any bricks) to
>> an existing 2-node cluster.
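>>
>> Illustrative steps, run from one of the existing nodes (gs3 standing
>> in for the brick-less dummy host):
>>
>>   sudo gluster peer probe gs3    # add the dummy node to the pool
>>   sudo gluster peer status       # all peers should show Connected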
>
>
> Again I ask, is this a departure from prior behavior?
Not really, it's been there for a long time. IIRC, this change was made
when the quorum feature was introduced. KP can correct me.
Right, so unless server quorum is enabled, this shouldn't be the problem.
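
A quick way to check is to look at the volume's reconfigured options
and, if server quorum was enabled, turn it off. A sketch (the volume
name myvol is a placeholder):

  gluster volume info myvol | grep quorum
  gluster volume set myvol cluster.server-quorum-type none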
>
>
>>>>> Is there a particular reason for this, or is it a misconfiguration?
>>>>>
>>>>>
>>>>>
>>>>> 2015-08-31 21:01 GMT+02:00 Joe Julian <joe@xxxxxxxxxxxxxxxx>:
>>>>>
>>>>>> On 08/31/2015 10:41 AM, Vijay Bellur wrote:
>>>>>>
>>>>>>> On Monday 31 August 2015 10:42 PM, Atin Mukherjee wrote:
>>>>>>>
>>>>>>>> > 2. Server2 dies. Server1 has to reboot.
>>>>>>>> >
>>>>>>>> > In this case the service stays down. It is impossible to
>>>>>>>> > remount the share without Server1. This is not acceptable
>>>>>>>> > for a High Availability system, and I believe it is also
>>>>>>>> > not intended, but a misconfiguration or bug.
>>>>>>>> This is exactly what I gave as an example in the thread
>>>>>>>> (please read it again). GlusterD is not supposed to start the
>>>>>>>> brick process if its counterpart hasn't come up yet in a
>>>>>>>> 2-node setup. It has been designed this way to block GlusterD
>>>>>>>> from operating on a volume which could be stale, as the node
>>>>>>>> was down while the cluster was operational earlier.
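>>>>>>>>
>>>>>>>> You can see this state from the CLI: after the lone node
>>>>>>>> reboots, the brick stays offline until a peer handshake
>>>>>>>> completes. For example (volume name is a placeholder; the
>>>>>>>> glusterd log path may differ by distro):
>>>>>>>>
>>>>>>>>   gluster volume status myvol   # brick shows Online: N until then
>>>>>>>>   tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log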
>>>>>>>>
>>>>>>> For two-node deployments, a third dummy node is recommended to
>>>>>>> ensure that quorum is maintained when one of the nodes is down.
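>>>>>>>
>>>>>>> The arithmetic: with the default server-quorum ratio (just over
>>>>>>> 50%), a 2-node cluster loses quorum the moment one node is down,
>>>>>>> since 1/2 = 50% is not more than half; with a third dummy node,
>>>>>>> 2 of 3 nodes remain, which keeps quorum. The ratio itself is
>>>>>>> tunable cluster-wide, e.g.:
>>>>>>>
>>>>>>>   gluster volume set all cluster.server-quorum-ratio 51%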
>>>>>>>
>>>>>>> Regards,
>>>>>>> Vijay
>>>>>>>
>>>>>> Have the settings changed to enable server quorum by default?
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users