Re: Setting up geo replication with GlusterFS 3.6.5

Hi,

I will try to modify that tool to work with 3.6.x versions. In 3.7 we added a no-verify option to the geo-rep create command, and that is the stage where this tool is failing.
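
For reference, the 3.7 create step looks roughly like this (a sketch; 3.6.x rejects the option, which is why the tool fails at this stage):

gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create no-verify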

"Unable to fetch slave volume details. Please check the slave cluster and slave volume.
geo-replication command failed "

This looks like an iptables/SELinux issue. The geo-rep create command verifies the slave volume by mounting it locally.
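
To rule those out, you can try something like this on the master node (a rough sketch; adjust the commands and port ranges to your distro and Gluster configuration):

getenforce        # is SELinux enforcing?
setenforce 0      # temporarily switch to permissive while testing
iptables -L -n    # look for rules blocking 24007-24008 and the brick ports (49152+ by default)

# reproduce what the geo-rep create command does internally:
mount -t glusterfs <slavehost>:/<slavevol> /mnt/test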

regards
Aravinda

On 09/15/2015 12:46 PM, Saravanakumar Arumugam wrote:
Hi,
You are right, this tool may not be compatible with 3.6.5.

I tried it myself with 3.6.5 and ran into this error.
==========================
georepsetup tv1 gfvm3 tv2
Geo-replication session will be established between tv1 and gfvm3::tv2
Root password of gfvm3 is required to complete the setup. NOTE: Password will not be stored.

root@gfvm3's password:
[    OK] gfvm3 is Reachable(Port 22)
[    OK] SSH Connection established root@gfvm3
[    OK] Master Volume and Slave Volume are compatible (Version: 3.6.5)
[    OK] Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub
[    OK] common_secret.pem.pub file copied to gfvm3
[    OK] Master SSH Keys copied to all Up Slave nodes
[    OK] Updated Master SSH Keys to all Up Slave nodes authorized_keys file
[NOT OK] Failed to Establish Geo-replication Session
Command type not found while handling geo-replication options
[root@gfvm3 georepsetup]#
==========================
So, some more changes are required in this tool.


Coming back to your question:

I have set up geo-replication on 3.6.5 using the commands below.
Please recheck all the commands (with the necessary changes at your end).

====================================================================
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# cat /etc/redhat-release
Fedora release 21 (Twenty One)
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# rpm -qa | grep glusterfs
glusterfs-devel-3.6.5-1.fc21.x86_64
glusterfs-3.6.5-1.fc21.x86_64
glusterfs-rdma-3.6.5-1.fc21.x86_64
glusterfs-fuse-3.6.5-1.fc21.x86_64
glusterfs-server-3.6.5-1.fc21.x86_64
glusterfs-debuginfo-3.6.5-1.fc21.x86_64
glusterfs-libs-3.6.5-1.fc21.x86_64
glusterfs-extra-xlators-3.6.5-1.fc21.x86_64
glusterfs-geo-replication-3.6.5-1.fc21.x86_64
glusterfs-api-3.6.5-1.fc21.x86_64
glusterfs-api-devel-3.6.5-1.fc21.x86_64
glusterfs-cli-3.6.5-1.fc21.x86_64
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd start
Redirecting to /bin/systemctl start  glusterd.service
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled)
   Active: active (running) since Tue 2015-09-15 12:19:32 IST; 4s ago
Process: 2778 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 2779 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─2779 /usr/sbin/glusterd -p /var/run/glusterd.pid
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# ps aux | grep glus
root 2779 0.0 0.4 448208 17288 ? Ssl 12:19 0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv1 gfvm3:/opt/volume_test/tv_1/b1 gfvm3:/opt/volume_test/tv_1/b2 force
volume create: tv1: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv2 gfvm3:/opt/volume_test/tv_2/b1 gfvm3:/opt/volume_test/tv_2/b2 force
volume create: tv2: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#  gluster volume start tv1
volume start: tv1: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume start tv2
volume start: tv2: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv1 /mnt/master/
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv2 /mnt/slave/
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster system:: execute gsec_create
Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 create push-pem
Creating geo-replication session between tv1 & gfvm3::tv2 has been successful
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 start
Starting geo-replication session between tv1 & gfvm3::tv2 has been successful
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 status

MASTER NODE    MASTER VOL    MASTER BRICK                SLAVE         STATUS             CHECKPOINT STATUS    CRAWL STATUS
----------------------------------------------------------------------------------------------------------------------------
gfvm3          tv1           /opt/volume_test/tv_1/b1    gfvm3::tv2    Initializing...    N/A                  N/A
gfvm3          tv1           /opt/volume_test/tv_1/b2    gfvm3::tv2    Initializing...    N/A                  N/A
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 status

MASTER NODE    MASTER VOL    MASTER BRICK                SLAVE         STATUS    CHECKPOINT STATUS    CRAWL STATUS
----------------------------------------------------------------------------------------------------------------
gfvm3          tv1           /opt/volume_test/tv_1/b1    gfvm3::tv2    Active    N/A                  Changelog Crawl
gfvm3          tv1           /opt/volume_test/tv_1/b2    gfvm3::tv2    Active    N/A                  Changelog Crawl
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# cp /etc/hosts
hosts        hosts.allow  hosts.deny
[root@gfvm3 georepsetup]# cp /etc/hosts* /mnt/master; sleep 20; ls /mnt/slave/
hosts  hosts.allow  hosts.deny
[root@gfvm3 georepsetup]# ls /mnt/master
hosts  hosts.allow  hosts.deny
[root@gfvm3 georepsetup]# ls /mnt/slave/
hosts  hosts.allow  hosts.deny
[root@gfvm3 georepsetup]#
====================================================================
One important step which I have NOT shown above is that you need to set up passwordless SSH.

Use ssh-keygen and ssh-copy-id to set up passwordless SSH from one node in the master cluster to one node in the slave cluster. This must be done before the "gluster system:: execute gsec_create" step, and on the same master node where you run the geo-rep create command.
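
For example (run on the master node; gfvm3 stands in for your slave node):

ssh-keygen                  # accept the defaults
ssh-copy-id root@gfvm3      # copy the public key to the slave node
ssh root@gfvm3 hostname     # should log in without asking for a password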

You can find geo-replication related logs under /var/log/glusterfs/geo-replication/.
Please share the logs if you still face any issues.
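
For example, to pull the recent entries (a sketch; the exact file names depend on your session):

ls /var/log/glusterfs/geo-replication/
tail -n 50 /var/log/glusterfs/geo-replication/tv1/*.log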

Thanks,
Saravana


On 09/14/2015 11:23 PM, ML mail wrote:
Yes, I can ping the slave node by its name and IP address; I have even entered its name manually in /etc/hosts.

Does this nice Python script also work with Gluster 3.6? The blog post only mentions 3.7...

Regards
ML




On Monday, September 14, 2015 9:38 AM, Saravanakumar Arumugam <sarumuga@xxxxxxxxxx> wrote:
Hi,

"Unable to fetch slave volume details. Please check the slave cluster
and slave volume. geo-replication command failed"

Have you checked whether you are able to reach the slave node from the
master node?
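
For example (substitute your slave host):

ping -c 3 <slavehost>        # basic reachability
nc -zv <slavehost> 24007     # is the glusterd port open?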

There is a super-simple tool for setting up geo-rep, written by Aravinda.
Refer:
http://blog.gluster.org/2015/09/introducing-georepsetup-gluster-geo-replication-setup-tool-2/

Refer to the README for both the usual (root-user-based) and the
mountbroker (non-root) setup details here:
https://github.com/aravindavk/georepsetup/blob/master/README.md

Thanks,
Saravana



On 09/13/2015 09:46 PM, ML mail wrote:
Hello,

I am using the following documentation to set up geo-replication between two sites: http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html

Unfortunately the step:

gluster volume geo-replication myvolume gfsgeo@xxxxxxxxxxxxxxxxxx::myvolume create push-pem

Fails with the following error:

Unable to fetch slave volume details. Please check the slave cluster and slave volume.
geo-replication command failed

Any ideas?

By the way, the documentation at
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/index.html does not seem to work with GlusterFS 3.6.5, which is why I am using the other documentation mentioned above. It fails at the mountbroker step (gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root).

Regards
ML
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



