Hi Sunny,
Thank you for the quick response.
However, it's not clear to me whether the fix has already been released or not.
The bug status is CLOSED NEXTRELEASE, and according to [1] the NEXTRELEASE
resolution means that the fix will be included in the next supported release.
The bug is logged against the mainline version, though, so I'm not sure what
this means exactly.
From the 6.4[2] and 6.5[3] release notes it seems it hasn't been released yet.
Ideally I would prefer not to patch my systems locally, so if you have an
ETA for when the fix will be released officially, I would really appreciate it.
Links:
[1] https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status
[2] https://docs.gluster.org/en/latest/release-notes/6.4/
[3] https://docs.gluster.org/en/latest/release-notes/6.5/
Thank you!
Best regards,
alexander iliev
On 8/30/19 9:22 AM, Sunny Kumar wrote:
Hi Alexander,
Thanks for pointing that out!
This issue is fixed now; please see the links below for the Bugzilla entry and the patch.
BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1709248
Patch - https://review.gluster.org/#/c/glusterfs/+/22716/
Hope this helps.
/sunny
On Fri, Aug 30, 2019 at 2:30 AM Alexander Iliev
<ailiev+gluster@xxxxxxxxx> wrote:
Hello dear GlusterFS users list,
I have been trying to set up geo-replication between two clusters for
some time now. The desired state is (Cluster #1) being replicated to
(Cluster #2).
Here are some details about the setup:
- Cluster #1: three nodes connected via a local network (172.31.35.0/24),
  one replicated (3 replica) volume.
- Cluster #2: three nodes connected via a local network (172.31.36.0/24),
  one replicated (3 replica) volume.
- The two clusters are connected to the Internet via separate network adapters.
- Only SSH (port 22) is open on cluster #2 nodes' adapters connected to the Internet.
- All nodes are running Ubuntu 18.04 and GlusterFS 6.3 installed from [1].
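
For reference, each volume is a standard 3-way replica; the commands below show
the general shape of how such a volume gets created (the node names and brick
paths are placeholders, not my actual ones):

# example only -- node1..node3 and the brick paths are placeholder names
gluster peer probe node2
gluster peer probe node3
gluster volume create mastervol replica 3 \
    node1:/data/brick1/mastervol \
    node2:/data/brick1/mastervol \
    node3:/data/brick1/mastervol
gluster volume start mastervol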
The first time I followed the guide[2], everything went fine up until I
reached the "Create the session" step. That was about a month ago; then I
had to temporarily stop working on this, and now I am coming back to it.
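
For context, the guide's steps up to and including "Create the session" boil
down to roughly the following (the group, user, volume and host names below
are placeholders for the ones I actually used):

# on the cluster #2 (slave) nodes: unprivileged account + mountbroker
groupadd geogroup                       # placeholder group name
useradd -G geogroup -m geoaccount       # placeholder user name
gluster-mountbroker setup /var/mountbroker-root geogroup
gluster-mountbroker add slavevol geoaccount
systemctl restart glusterd

# on one cluster #1 (master) node: SSH keys + session creation
ssh-copy-id geoaccount@slave-node1
gluster-georep-sshkey generate
gluster volume geo-replication mastervol \
    geoaccount@slave-node1::slavevol create push-pem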
Currently, if I try to check the mountbroker status, I get the following:
# gluster-mountbroker status
Traceback (most recent call last):
  File "/usr/sbin/gluster-mountbroker", line 396, in <module>
    runcli()
  File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 225, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-mountbroker", line 275, in run
    out = execute_in_peers("node-status")
  File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 127, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute mountbroker.py node-status')
And in /var/log/glusterfs/glusterd.log I have:
[2019-08-10 15:24:21.418834] E [MSGID: 106336] [glusterd-geo-rep.c:5413:glusterd_op_sys_exec] 0-management: Unable to end. Error : Success
[2019-08-10 15:24:21.418908] E [MSGID: 106122] [glusterd-syncop.c:1445:gd_commit_op_phase] 0-management: Commit of operation 'Volume Execute system commands' failed on localhost : Unable to end. Error : Success
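
The failing command is visible in the exception, so I suppose I can at least
re-run it directly (outside the Python wrapper) and watch glusterd.log on each
node while it executes:

# command taken verbatim from the exception above
gluster system:: execute mountbroker.py node-status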
So, I have two questions right now:
1) Is there anything wrong with my setup (networking, open ports, etc.)?
Is it expected to work with this setup or should I redo it in a
different way?
2) How can I troubleshoot the current state of my setup? Can I find out
what's missing or wrong and continue from there, or should I just start from
scratch?
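
In case it helps with (2): as far as I understand, gluster-mountbroker stores
its settings in glusterd.vol on the slave (cluster #2) nodes, so that seems
like one place to verify. The option names below are what I would expect based
on the docs; geoaccount/geogroup/slavevol are placeholder names, not confirmed:

# run on each cluster #2 node (paths and names are my assumptions)
grep -E 'mountbroker|geo-replication-log-group|rpc-auth-allow-insecure' /etc/glusterfs/glusterd.vol
# expecting entries along the lines of:
#   option mountbroker-root /var/mountbroker-root
#   option mountbroker-geo-replication.geoaccount slavevol
#   option geo-replication-log-group geogroup
#   option rpc-auth-allow-insecure on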
Links:
[1] http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu
[2]
https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
Thank you!
Best regards,
--
alexander iliev
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users