What has happened here is that one of the nodes acked negatively, which led to an inconsistent state, as GlusterD doesn't have a transaction rollback mechanism. This is why subsequent commands on the volume failed.
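To illustrate the failure mode described above, here is a toy model of a commit phase without rollback; this is a hypothetical sketch, not GlusterD's actual code, and the class and function names are invented for illustration:

```python
# Toy model: a commit phase applied node-by-node with NO rollback.
# If one node acks negatively, nodes that already committed keep their
# new state, leaving the cluster inconsistent. Hypothetical sketch only.

class Node:
    def __init__(self, name):
        self.name = name
        self.volume_state = "Stopped"

    def commit(self, new_state, ack=True):
        if not ack:
            return False  # negative ack: this node refuses the commit
        self.volume_state = new_state
        return True

def start_volume(nodes, acks):
    # Apply the commit on each node in turn; abort on the first failure.
    # Without rollback, earlier nodes have already changed state.
    for node, ack in zip(nodes, acks):
        if not node.commit("Started", ack):
            return False  # the operation "fails" overall
    return True

a, b = Node("A"), Node("B")
ok = start_volume([a, b], acks=[True, False])
# ok is False, yet node A is already "Started" while B remains "Stopped"
```

With a rollback step, the first node would be returned to "Stopped" when the second node nacks; without one, the divergent state persists and later commands observe a cluster that disagrees with itself.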
We'd need to see why the other node didn't behave correctly. What error was thrown at the CLI when the volume start failed? Could you attach the glusterd and cmd_history.log files from both nodes?
-Atin
Sent from one plus one
On Oct 10, 2015 9:35 PM, "Mauro M." <gluster@xxxxxxxxxxxx> wrote:
>
> Hello,
>
> Today I received the update to 3.7.5 and since the update I began to have
> serious issues. My cluster has two bricks with replication.
>
> With both bricks up I could not start the volume, which had been stopped
> soon after the update. By taking one of the nodes down I finally managed
> to start the volume, but ... with the following error:
>
> [2015-10-10 09:40:59.600974] E [MSGID: 106123]
> [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of
> operation 'Volume Start' failed on localhost
>
> At that point clients could mount the filesystem. However,
> # gluster volume status
> still showed the volume as stopped.
>
> If I stopped and started the volume again I hit the same problem, but if
> I then issued "volume start myvolume" a second time, it would show as
> started!
>
> With both bricks up and running, however, there is no way to start the
> volume once it is stopped. Only by taking one of the bricks down can I
> start it with the procedure above.
>
> I am downgrading to 3.7.4.
>
> If you have not yet upgraded, BEWARE!
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users