Re: gluster processes won't start when a single node is booted

Hi Atin,

Thank you for your kind reply.

For a "brick" down I mean the server running the gluster brick service
completely turned off.

I can circumvent the issue by stopping and then starting the volume again,
as in my previous message. However, that does not sound right, because the
workaround either:
- should not be necessary, if it is supposed to work as it did in v3.5, or
- should not work at all, if your hypothesis is correct.

Mauro

On Sun, September 20, 2015 14:54, Atin Mukherjee wrote:
> When you say a brick is down, do you mean the glusterd instance on that
> node is also down? If that's the case I stand by my point. If you meant
> that only the glusterfsd process was not running, then could you check
> whether you have enabled quorum by chance?
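
For reference, I would check that roughly like this (assuming the
"volume get" syntax is available in 3.7; the volume name is the one
shown further below):

# gluster volume get gv_home cluster.server-quorum-type
# gluster volume info gv_home
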
>
> -Atin
> Sent from one plus one
> On Sep 20, 2015 7:20 PM, "Mauro M." <gluster@xxxxxxxxxxxx> wrote:
>
>> It does not sound right.
>>
>> Further to my previous tests, if I stop and start the volume whilst
>> brick2
>> is down then the glusterfs processes do start and I am able to mount the
>> volume.
>>
>> From the status where the processes on brick1 show N/A for the TCP port,
>> I execute:
>>
>> # gluster volume stop gv_home
>> # gluster volume start gv_home
>>
>> At this point the status shows the processes listening on their TCP
>> ports, and thus I am able to mount the volume.
>>
>> I inserted these steps in /etc/rc.local, where I now test whether /home
>> is mounted and, if it is not, stop and start the volume, then mount it.
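
For reference, what I put in /etc/rc.local boils down to roughly the
following sketch (the glusterfs mount source brick1:/gv_home and the use
of --mode=script to suppress the stop confirmation are assumptions here,
not my exact lines):

# bounce the volume and mount /home only if it is not already mounted
if ! mountpoint -q /home ; then
    gluster --mode=script volume stop gv_home
    gluster --mode=script volume start gv_home
    mount -t glusterfs brick1:/gv_home /home
fi
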
>>
>> It works to my satisfaction; however, why do I have to go through this
>> workaround when 3.5 worked flawlessly?
>>
>> Mauro
>>
>> On Sun, September 20, 2015 12:50, Atin Mukherjee wrote:
>> > This behaviour is expected: in a 2-node cluster setup, brick processes
>> > are started only when the other node comes up. The same would be true
>> > in the 3.5.x series as well. Adding a dummy node into the cluster would
>> > help you to solve your problem.
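
If I go down that route, I understand the dummy peer would be added with
something like the following, run from one of the existing nodes (the
hostname "dummy-node" here is just a placeholder):

# gluster peer probe dummy-node
# gluster peer status
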
>> > -Atin
>> > Sent from one plus one
>> > On Sep 20, 2015 4:56 PM, "Mauro M." <gluster@xxxxxxxxxxxx> wrote:
>> >> Hi all,
>> >> I hope you might help.
>> >> I just upgraded from 3.5.6 to 3.7.4
>> >> My configuration is 1 volume with 2 x bricks replicated.
>> >> Normally I have brick1 running and brick2 turned off, so that when I
>> >> want to do maintenance on brick1 I turn on brick2, wait for
>> >> synchronization to complete, and turn off brick1.
>> >> Often I just reboot brick1 with brick2 still turned off.
>> >> With glusterfs version 3.5 I could do all of the above.
>> >> After the upgrade to 3.7.4, if I boot brick1 (or brick2) without the
>> >> other node, glusterd starts, but the gluster network processes won't
>> >> start. Here is the output of gluster volume info:
>> >> Volume Name: gv_home
>> >> Type: Replicate
>> >> Volume ID: ef806153-2a02-4db9-a54e-c2f89f79b52e
>> >> Status: Started
>> >> Number of Bricks: 1 x 2 = 2
>> >> Transport-type: tcp
>> >> Bricks:
>> >> Brick1: brick1:/brick0/gv_home
>> >> Brick2: brick2:/brick0/gv_home
>> >> Options Reconfigured:
>> >> nfs.disable: on
>> >> config.transport: tcp
>> >> ... and gluster volume status:
>> >> Status of volume: gv_home
>> >> Gluster process                         TCP Port  RDMA Port  Online  Pid
>> >> ------------------------------------------------------------------------------
>> >> Brick brick1:/brick0/gv_home            N/A       N/A        N       N/A
>> >> NFS Server on localhost                 N/A       N/A        N       N/A
>> >>
>> >> Task Status of Volume gv_home
>> >> ------------------------------------------------------------------------------
>> >> There are no active volume tasks
>> >> Under this condition gv_home cannot be mounted.
>> >> Only when I start brick2, and glusterd comes up on brick2, do the
>> >> gluster processes also start on brick1, and gv_home can be mounted:
>> >> Status of volume: gv_home
>> >> Gluster process                         TCP Port  RDMA Port  Online  Pid
>> >> ------------------------------------------------------------------------------
>> >> Brick brick1:/brick0/gv_home            49158     0          Y       30049
>> >> Brick brick2:/brick0/gv_home            49158     0          Y       14797
>> >> Self-heal Daemon on localhost           N/A       N/A        Y       30044
>> >> Self-heal Daemon on brick2              N/A       N/A        Y       14792
>> >>
>> >> Task Status of Volume gv_home
>> >> ------------------------------------------------------------------------------
>> >> There are no active volume tasks
>> >> Once I turn off brick2, the volume remains available and mounted
>> >> without issues (for as long as the corresponding gluster processes
>> >> remain active; if I kill them I am back to having no volume).
>> >> The issue is that I would like to safely boot one of the bricks without
>> >> having to boot both in order to get the volume back and mountable, which
>> >> is what I was able to do with glusterfs version 3.5.
>> >> Please could you help?
>> >> Is there any parameter to set that would enable the same behaviour as
>> >> in 3.5?
>> >> Thank you in advance,
>> >> Mauro
>> >> _______________________________________________
>> >> Gluster-users mailing list
>> >> Gluster-users@xxxxxxxxxxx
>> >> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>>
>>
>


-- 


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


