We debugged this privately.

This was an issue caused by our not correctly saving global options
(options which apply to all volumes, such as nfs options) for all
volumes. We were saving these options to disk only for the volume on
which the 'volume set' was performed, but we would modify all the
volumes in-memory. This led to volume mismatches and peer rejection,
such as the problem Nux faced. I've posted a patch for review [1] which
solves this.

~kaushal

[1] http://review.gluster.org/6007

On Thu, Sep 26, 2013 at 7:29 PM, Lalatendu Mohanty <lmohanty at redhat.com> wrote:
> On 09/25/2013 08:12 PM, Nux! wrote:
>>
>> Hello,
>>
>> I'm currently using 4 servers with various types of volumes on them. Today
>> I noticed 2 of them report the other 2 as "peer rejected" and vice versa.
>> Where do I even begin to debug this? I don't see anything meaningful in
>> the logs.
>> Any pointers?
>>
> If you are running different versions of gluster on these nodes, this is a
> possibility; otherwise it might be a bug. You can try detaching the peers
> with the command "gluster peer detach <HOSTNAME>" and then peer probe
> these nodes again.
>
> Which version of gluster are you running, and on which distribution?
>
> Thanks,
> Lala
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
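For anyone hitting the same "peer rejected" symptom, the detach/re-probe recovery Lala suggests would look roughly like the following sketch. Hostnames here are placeholders, the commands should be run from a node that is still healthy, and you should verify version parity across nodes before re-probing, since mismatched gluster versions are one cause of rejection:

```shell
# Inspect cluster state: peers in "Peer Rejected" show up here
gluster peer status

# Confirm every node runs the same gluster version before re-probing
gluster --version

# On a healthy node, detach the rejected peer, then probe it again
# ("server3" is a placeholder hostname)
gluster peer detach server3
gluster peer probe server3
```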