Re: Geo-replication (v3.5.3)

Hi Paul,

That was certainly not made clear by the documentation, what there is of it. I've done as you suggest and it's working now. Thank you.

regards,
John


On 16/03/15 09:22, Paul Mc Auley wrote:
One thing I've noticed is that the SSH host key of _each_ of the slave
bricks needs to be in the known_hosts file of each of the master bricks.
Failing to ensure this causes geo-replication to fail in a non-obvious
way.
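
One hedged way to do that from each master brick is something like the
following (slave1 and slave2 are placeholder hostnames, not names from
this thread -- substitute your actual slave bricks):

```shell
# Append each slave brick's SSH host key to root's known_hosts
# so the geo-rep SSH connections don't stall on host verification.
# slave1/slave2 are placeholders -- substitute your slave brick hosts.
for slave in slave1 slave2; do
    ssh-keyscan "$slave" >> /root/.ssh/known_hosts
done
```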

Regards,
Paul

On 12 March 2015 at 20:29, John Gardeniers
<jgardeniers@xxxxxxxxxxxxxxxxx> wrote:
Just to make it clear, I *have* set up passwordless SSH between the node
where I'm running the command and the slave. I thought that should have been
obvious from my message. Also, the identity files are in the standard
location. So, back to the question I asked: what gives? More to the point,
how do I make this work?



On 11/03/15 18:24, M S Vishwanath Bhat wrote:



On 11 March 2015 at 06:30, John Gardeniers <jgardeniers@xxxxxxxxxxxxxxxxx>
wrote:
Using Gluster v3.5.3 and trying to follow the geo-replication instructions
(https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md),
step by step, gets me nowhere.

The slave volume has been created and passwordless SSH is set up for root
from the master to slave. Both master and slave volumes are running.

Running "gluster system:: execute gsec_create", no problem.
Running "gluster volume geo-replication <master_volume>
<slave_host>::<slave_volume> create push-pem [force]" (with appropriate
parameters, with and without "force") results in "Passwordless ssh login has
not been setup with <slave_server>. geo-replication command failed"
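
For reference, the sequence I'm following looks roughly like this
(mastervol, slavehost and slavevol stand in for my actual names):

```shell
# 1. Generate and distribute the common pem keys across the master cluster.
gluster system:: execute gsec_create

# 2. Create the geo-rep session, pushing the pem keys to the slave.
#    (mastervol, slavehost and slavevol are placeholders.)
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
```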

As I said, passwordless SSH *is* set up. I can SSH from the master to the
slave without a password just fine. What gives? More to the point, how do I
make this work?

Just to make it clear, passwordless SSH needs to be set up between the
master node where you run the "geo-rep-create" command and the slave node
specified in the "geo-rep-create" command.

Also, if you have your identity file saved in a non-standard location,
geo-rep has a known bug: https://bugzilla.redhat.com/show_bug.cgi?id=1181117

A patch has been submitted for it and should be available in the next
release of glusterfs.
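
Until that fix is released, one possible workaround (untested here;
the hostname and key path below are placeholders) is to point SSH at
the non-standard identity file via ~/.ssh/config on the master node:

```
# /root/.ssh/config on the master node
# slavehost and the key path are placeholders -- use your own values.
Host slavehost
    User root
    IdentityFile /path/to/custom_id_rsa
```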


regards,
John


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users








