Re: set: failed: Quorum not met. Volume operation not allowed. SUCCESS

Hi,

You had server-quorum enabled, which could be why you were getting
those errors in the first place. In recent releases only client-quorum
is enabled by default and server-quorum is disabled.
Yes, the order matters in such cases.
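
If you want to sanity check the current values, something like this
should work on recent releases (VOL being your volume name):

  gluster volume get VOL cluster.server-quorum-type
  gluster volume get VOL cluster.quorum-type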

Regards,
Karthik

On Fri, Aug 28, 2020 at 2:37 AM WK <wkmail@xxxxxxxxx> wrote:
>
> So success!
>
> I don't know why, but when I set "server-quorum-type" to none FIRST,
> it went through without complaining about quorum.
>
> Then quorum-type could be set to none as well:
>
>    gluster volume set VOL cluster.server-quorum-type none
>    gluster volume set VOL cluster.quorum-type none
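>
> To confirm both options actually took effect, something like this
> should list them both as none:
>
>    gluster volume get VOL all | grep -i quorum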
>
> Finally I ran Karthik's remove-brick command; it worked this time, and
> I am now copying off the needed image.
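>
> For anyone hitting this later: with the volume up, a plain fuse mount
> along the lines of
>
>    mount -t glusterfs localhost:/VOL /mnt/VOL
>
> (with your own volume name and mount point) should be enough to reach
> the data.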
>
> So I guess order counts.
>
> Thanks.
>
> -wk
>
>
>
> On 8/27/2020 12:47 PM, WK wrote:
> > No luck. Same problem.
> >
> > I stopped the volume.
> >
> > I ran the remove-brick command. It warned about not being able to
> > migrate files from removed bricks and asked if I wanted to continue.
> >
> > When I answered 'yes', Gluster responded with 'failed: Quorum not met
> > Volume operation not allowed'.
> >
> >
> > -wk
> >
> > On 8/26/2020 9:28 PM, Karthik Subrahmanya wrote:
> >> Hi,
> >>
> >> Since your two nodes are scrapped and there is no chance that they
> >> will come back at a later time, you can try reducing the replica
> >> count to 1 by removing the down bricks from the volume and then
> >> remounting the volume to access the data that is available on the
> >> only remaining brick.
> >> The remove-brick command looks like this:
> >>
> >> gluster volume remove-brick VOLNAME replica 1
> >> <ip-of-the-first-node-down>:/brick-path
> >> <ip-of-the-second-node-down>:/brick-path force
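> >>
> >> Once that succeeds, you should be able to start the volume (if it is
> >> not already running) and mount it from the remaining node, something
> >> like:
> >>
> >> gluster volume start VOLNAME
> >> mount -t glusterfs <ip-of-the-up-node>:/VOLNAME /mnt/VOLNAME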
> >>
> >> Regards,
> >> Karthik
> >>
> >>
> >> On Thu, Aug 27, 2020 at 4:24 AM WK <wkmail@xxxxxxxxx> wrote:
> >>> So we migrated a number of VMs from a small Gluster 2+1A volume to a
> >>> newer cluster.
> >>>
> >>> Then a few days later the client said he wanted an old forgotten
> >>> file that had been left behind on the deprecated system.
> >>>
> >>> However, the arbiter and one of the brick nodes had been scrapped,
> >>> leaving only a single Gluster node.
> >>>
> >>> The volume I need uses shards, so I am not excited about having to
> >>> piece it back together.
> >>>
> >>> I powered up the single node and tried to mount the volume, and of
> >>> course it refused to mount due to quorum; gluster volume status
> >>> showed the volume offline.
> >>>
> >>> In the past I had worked around this issue by disabling quorum, but
> >>> that was years ago, so I googled it and found list messages
> >>> suggesting the following:
> >>>
> >>>   gluster volume set VOL cluster.quorum-type none
> >>>   gluster volume set VOL cluster.server-quorum-type none
> >>>
> >>> However, the Gluster 6.9 system refuses to accept those set
> >>> commands due to quorum and spits out the 'set failed' error.
> >>>
> >>> So in modern Gluster, what is the preferred method for starting and
> >>> mounting a single node/volume that was once part of an actual
> >>> 3-node cluster?
> >>>
> >>> Thanks.
> >>>
> >>> -wk
> >>>
> >>>
>

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


