Re: How-to start gluster when only one node is up ?


 



Now I understand it, thanks.

/Peter

-----Original Message-----
From: Diego Remolina [mailto:dijuremo@xxxxxxxxx]
Sent: 30 October 2015 14:11
To: Peter Michael Calum
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: How-to start gluster when only one node is up ?

Install and configure gluster, make sure the firewall openings are
correct on all nodes, and then run:

gluster peer probe (new node IP)

Then you should see it in the output of 'gluster volume status [volname]':

# gluster v status export
Status of volume: export
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.7:/bricks/hdds/brick                       49152   Y       21068
Brick 10.0.1.6:/bricks/hdds/brick                       49154   Y       21197
Self-heal Daemon on localhost                           N/A     Y       21217
Self-heal Daemon on 10.0.1.7                            N/A     Y       21087
Self-heal Daemon on 10.0.1.5                            N/A     Y       12956

Task Status of Volume export
------------------------------------------------------------------------------
There are no active volume tasks

As you can see above, I have two nodes with bricks, 10.0.1.6 and
10.0.1.7, and the third node, a dummy node with no bricks, is 10.0.1.5.
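
For example, on CentOS 7 with firewalld, the firewall openings and the
probe could look roughly like this (just a sketch; the port range is only
an example -- glusterd uses 24007-24008 and each brick gets a port
starting at 49152):

# on every node: open the gluster management and brick ports
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49160/tcp
firewall-cmd --reload

# from an existing node: add the new (dummy) node and verify
gluster peer probe 10.0.1.5
gluster peer status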

HTH,

Diego


On Fri, Oct 30, 2015 at 8:54 AM, Peter Michael Calum <pemca@xxxxxx> wrote:
> Hi,
>
>
>
> How do I add a 'dummy' node?
>
>
>
> thanks,
>
> Peter
>
>
>
> From: gluster-users-bounces@xxxxxxxxxxx
> [mailto:gluster-users-bounces@xxxxxxxxxxx] On behalf of Atin Mukherjee
> Sent: 30 October 2015 13:14
> To: Mauro Mozzarelli
> Cc: gluster-users@xxxxxxxxxxx
> Subject: Re: How-to start gluster when only one node is up ?
>
>
>
> -Atin
> Sent from one plus one
> On Oct 30, 2015 5:28 PM, "Mauro Mozzarelli" <mauro@xxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> Atin keeps giving the same answer: "it is by design"
>>
>> I keep saying "the design is wrong and it should be changed to cater for
>> standby servers"
> Every design has its own set of limitations, and I would call this a
> limitation rather than saying the overall design itself is wrong. I would
> again stand by my point, as correctness is always a priority in a
> distributed system. This behavioural change was introduced in 3.5, and if
> it was not included in the release notes, I apologize on behalf of the
> release management.
> As communicated earlier, we will definitely resolve this issue in GlusterD2.
>>
>> In the meantime this is the workaround I am using:
>> When the single node starts I stop and start the volume, and then it
>> becomes mountable. On CentOS 6 and CentOS 7 this works with releases up to
>> 3.7.4. Release 3.7.5 is broken, so I reverted to 3.7.4.
> This is where I am not convinced. An explicit volume start should start the
> bricks; can you raise a BZ with all the relevant details?
>>
>> In my experience glusterfs releases are a bit hit and miss. Often
>> something stops working with a newer release, then after a few more
>> releases it works again or there is a workaround ... Not quite the
>> stability one would want for commercial use, so at the moment I can only
>> risk using it for my home servers, hence the cluster with one node
>> always ON and the second as STANDBY.
>>
>> #!/bin/bash
>> # Workaround: if the gluster volume is not mounted (e.g. the single node
>> # has just booted), force a stop/start of the volume so the local brick
>> # comes up, then mount it and check again.
>> MOUNT=/home
>> LABEL="GlusterFS:"
>> if grep -qs $MOUNT /proc/mounts; then
>>     echo "$LABEL $MOUNT is mounted";
>>     gluster volume start gv_home 2>/dev/null
>> else
>>     echo "$LABEL $MOUNT is NOT mounted";
>>     echo "$LABEL Restarting gluster volume ..."
>>     yes|gluster volume stop gv_home > /dev/null
>>     gluster volume start gv_home
>>     mount -t glusterfs sirius-ib:/gv_home $MOUNT;
>>     if grep -qs $MOUNT /proc/mounts; then
>>         echo "$LABEL $MOUNT is mounted";
>>         gluster volume start gv_home 2>/dev/null
>>     else
>>         echo "$LABEL failure to mount $MOUNT";
>>     fi
>> fi
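>>
>> One way to hook this in at boot (just a sketch; the script path and file
>> names below are examples, and the volume name and mount point should be
>> adjusted for your setup):
>>
>> install -m 0755 gluster-home-mount.sh /usr/local/sbin/gluster-home-mount.sh
>> echo '@reboot root /usr/local/sbin/gluster-home-mount.sh' > /etc/cron.d/gluster-home-mount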
>>
>> I hope this helps.
>> Mauro
>>
>> On Fri, October 30, 2015 11:48, Atin Mukherjee wrote:
>> > -Atin
>> > Sent from one plus one
>> > On Oct 30, 2015 4:35 PM, "Remi Serrano" <rserrano@xxxxxxxx> wrote:
>> >>
>> >> Hello,
>> >>
>> >>
>> >>
>> >> I set up a gluster file cluster with 2 nodes. It works fine.
>> >>
>> >> But when I shut down the 2 nodes and start up only one node, I cannot
>> >> mount the share:
>> >>
>> >>
>> >>
>> >> [root@xxx ~]#  mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare
>> >>
>> >> Mount failed. Please check the log file for more details.
>> >>
>> >>
>> >>
>> >> Log says :
>> >>
>> >> [2015-10-30 10:33:26.147003] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.5 (args: /usr/sbin/glusterfs -127.0.0.1 --volfile-id=/gv0 /glusterLocalShare)
>> >>
>> >> [2015-10-30 10:33:26.171964] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>> >>
>> >> [2015-10-30 10:33:26.185685] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>> >>
>> >> [2015-10-30 10:33:26.186972] I [MSGID: 114020] [client.c:2118:notify] 0-gv0-client-0: parent translators are ready, attempting connect on transport
>> >>
>> >> [2015-10-30 10:33:26.191823] I [MSGID: 114020] [client.c:2118:notify] 0-gv0-client-1: parent translators are ready, attempting connect on transport
>> >>
>> >> [2015-10-30 10:33:26.192209] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
>> >>
>> >> [2015-10-30 10:33:26.192339] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-gv0-client-0: disconnected from gv0-client-0. Client process will keep trying to connect to glusterd until brick's port is available
>> >>
>> >>
>> >>
>> >> And when I check the volumes I get:
>> >>
>> >> [root@xxx ~]# gluster volume status
>> >>
>> >> Status of volume: gv0
>> >>
>> >> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> >> ------------------------------------------------------------------------------
>> >> Brick 10.32.0.11:/glusterBrick1/gv0         N/A       N/A        N       N/A
>> >> NFS Server on localhost                     N/A       N/A        N       N/A
>> >> NFS Server on localhost                     N/A       N/A        N       N/A
>> >>
>> >> Task Status of Volume gv0
>> >> ------------------------------------------------------------------------------
>> >> There are no active volume tasks
>> >>
>> >>
>> >>
>> >> If I start the second node, all is OK.
>> >>
>> >>
>> >>
>> >> Is this normal?
>> > This behaviour is by design. In a multi-node cluster, when GlusterD comes
>> > up it does not start the bricks until it receives the configuration from
>> > one of its peers, to ensure that stale information is not used.
>> > In your case, since the other node is down, the bricks are not started
>> > and hence the mount fails.
>> > As a workaround, we recommend adding a dummy node to the cluster to
>> > avoid this issue.
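>> > For example (just a sketch; the host name below is a placeholder), from
>> > one of the existing nodes:
>> >
>> > gluster peer probe <dummy-node>
>> > gluster pool list
>> >
>> > Once the dummy node is in the pool, a brick node that boots while the
>> > other brick node is down can still fetch the volume configuration from
>> > the dummy node (as long as it is reachable) and start its bricks.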
>> >>
>> >>
>> >>
>> >> Regards,
>> >>
>> >>
>> >>
>> >> Rémi
>> >>
>> >>
>> >>
>> >>
>> >> _______________________________________________
>> >> Gluster-users mailing list
>> >> Gluster-users@xxxxxxxxxxx
>> >> http://www.gluster.org/mailman/listinfo/gluster-users
>> > _______________________________________________
>> > Gluster-users mailing list
>> > Gluster-users@xxxxxxxxxxx
>> > http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>> --
>> Mauro Mozzarelli
>> Phone: +44 7941 727378
>> eMail: mauro@xxxxxxxxxxxx
>>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



