I'm currently running the test in a loop on slave0 and haven't had any
failures yet. I'm running on commit d1ff9dead (glusterd: Fix
conf->generation to stop new peers participating in a transaction,
while the transaction is in progress.), Avra's fix which was merged
into master yesterday. As before, I made a small change to log the
addresses of the peerinfo objects in __glusterd_peer_rpc_notify. What
I'm observing is that the change in memory address is due to glusterd
being restarted during the test, so we can rule out duplicated
peerinfos as the cause of the problems that were observed.

I'll keep running the test in a loop to see if I can hit any failures.
If I get a failure, I'll debug it.

~kaushal

On Wed, May 27, 2015 at 9:54 AM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
>
>
> On 05/27/2015 08:55 AM, Krishnan Parthasarathi wrote:
>>> I will encourage to do it before this patch gets into the codebase.
>>
>> The duplicate peerinfo object issue is different from the problems
>> in the current generation number scheme. I don't see why this patch
>> needs to wait. We may need to keep volume-snapshot-clone.t disabled
>> until the duplicate peerinfo issue is resolved, or understood
>> better. Does that make sense?
> Running snapshot-clone without this patch encounters a failure (once
> in two runs); that's what we observed in the last exercise. I know
> this issue has nothing to do with the problem we are addressing with
> this patch. However, snapshot-clone *will not* fail, even if we
> encounter the duplicate peerinfo object issue, if we apply this patch.
> Hope that clarifies my point about testing snapshot-clone.t before
> this patch gets in.
>
> --
> ~Atin

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
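
[For concreteness, the instrumentation Kaushal describes might look
something like the snippet below. This is a hypothetical sketch, not
the actual change from the thread: gf_log, THIS, and the peerinfo
fields are glusterd source names, but exactly where and how peerinfo
is looked up in __glusterd_peer_rpc_notify varies by branch.]

        /* Inside __glusterd_peer_rpc_notify(), once the peerinfo for
         * the event has been looked up. Logging the object's address
         * lets successive events for the same peer be compared: if
         * the address changes only across glusterd restarts, that
         * rules out a duplicate peerinfo being created within a
         * single run. */
        gf_log (THIS->name, GF_LOG_INFO,
                "peerinfo for %s at %p (event %d)",
                peerinfo->hostname, (void *)peerinfo, event);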