Hi,
even after stopping/starting the volume no change.
But that problem seems to be bug #1103413.
Using the workaround suggested by Jae Park fixed it and now it is
working.
Thanks everybody for the support.
Ivano
On 6/13/14 2:31 PM, Ivano Talamo wrote:
On 6/13/14 1:08 PM, Niels de Vos wrote:
On Fri, Jun 13, 2014 at 09:56:46AM +0200,
Ivano Talamo wrote:
I had done it only on one of the two :(
But even after I've done it on the other server and restarted the
gluster daemon, I see no change.
Can you confirm that the issue is still occurring after you stopped
and restarted the volume? A restart of the volume is needed to
activate the server.allow-insecure volume option. When that option is
not active, the glusterfs-client (libgfapi) will not be able to
connect to the bricks. The bricks will detect that the client uses a
port > 1024 and will not allow access.
With a stop and start of the volume, the .vol files that are used by
the brick processes get regenerated. With this regeneration, the
server.allow-insecure option gets activated correctly. You only have
to execute these commands on one storage server:
# gluster volume stop <volname>
# gluster volume start <volname>
This is one of the most common issues when using libgfapi. If this
indeed was not done in your environment, we may need to explain it
better, or make it more obvious in the documentation.
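[Editor's note: for reference, a sketch of the full sequence commonly used to enable insecure-port access for libgfapi clients. <volname> is a placeholder; the rpc-auth-allow-insecure step in glusterd.vol is an assumption drawn from typical libgfapi setup guides, not something stated in this thread.]

```shell
# Allow the volume's bricks to accept client connections from ports > 1024:
gluster volume set <volname> server.allow-insecure on

# Commonly also needed for libgfapi (assumption, not from this thread):
# add the following line to /etc/glusterfs/glusterd.vol on every storage
# server, then restart the glusterd service:
#   option rpc-auth-allow-insecure on

# Restarting glusterd alone is not enough; stop and start the volume so the
# brick .vol files are regenerated with the new option active:
gluster volume stop <volname>
gluster volume start <volname>
```

This is an administrative CLI/config fragment, not a standalone runnable program.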
I restarted only the glusterd service, not the volume.
At the moment I cannot stop the volume since it is mounted and in
production.
I will do it after 17 UTC, or tomorrow at the latest, and will update
you then.
Thanks,
Ivano
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users