These are my findings regarding the issue KP mentioned with volume-snapshot-clone.t.

From what I gathered, the double-peerinfo-objects issue was observed only via the logs. Additional logging was added to glusterd_peer_rpc_notify to log each object's address and peerinfo->hostname (a sketch of this kind of logging is at the end of this mail). I did the same and observed the same in my logs. But what I found was that glusterd was being restarted in between the log entries that showed different addresses. Since a restart gives glusterd a fresh address space, the recreated peerinfo objects were bound to land at different addresses. Avra, can you confirm that the above is correct? If it is, I can help debug the issue being faced.

I've been running the test on a local VM, but I've not gotten it to fail, even without Avra's partial fix. I spoke to Atin regarding this, and he informed me that the failures are hit only on the Rackspace slave VMs. I'll try to get access to a slave VM and check the issue out.

~kaushal

On Tue, May 26, 2015 at 3:37 PM, Krishnan Parthasarathi <kparthas@xxxxxxxxxx> wrote:
> All,
>
> The following are the regression test failures that are being looked
> at by Atin, Avra, Kaushal and myself.
>
> 1) ./tests/bugs/glusterd/bug-974007.t: to be fixed by http://review.gluster.org/10872
>
> 2) ./tests/basic/volume-snapshot-clone.t: to be fixed (partially) by http://review.gluster.org/10895
>    There is another issue (the 'other' part) where there are two peerinfo objects for a given
>    peer/node. This is being investigated by Kaushal.
>
> Will keep this thread updated as and when more progress is made.
>
> cheers,
> KP
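For illustration, below is a minimal, self-contained sketch of the debugging technique described above: logging an object's address next to its identifying field so that duplicate objects for the same peer show up in the logs. The struct and function names are simplified stand-ins for glusterd's peerinfo object and glusterd_peer_rpc_notify, not the actual patch under discussion.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for glusterd's peerinfo object. */
    struct peerinfo {
            char hostname[64];
    };

    /* Stand-in for glusterd_peer_rpc_notify: logging the object's address
     * next to its hostname lets you grep for the same hostname appearing
     * with two different addresses, i.e. two peerinfo objects for one
     * peer. Caveat from above: across a glusterd restart even a single
     * legitimate object gets a new address, so differing addresses only
     * indicate duplication within one process lifetime. */
    static void
    peer_rpc_notify (struct peerinfo *peerinfo, int event)
    {
            fprintf (stderr, "notify: peerinfo=%p hostname=%s event=%d\n",
                     (void *)peerinfo, peerinfo->hostname, event);
    }

    int
    main (void)
    {
            struct peerinfo *a = calloc (1, sizeof (*a));
            struct peerinfo *b = calloc (1, sizeof (*b));

            strcpy (a->hostname, "node-1");
            strcpy (b->hostname, "node-1");  /* duplicate entry for one peer */

            peer_rpc_notify (a, 1);
            peer_rpc_notify (b, 1);  /* same hostname, different address */

            free (a);
            free (b);
            return 0;
    }

In the real logs, the signature of genuine duplication would be one peer's hostname paired with two distinct addresses within a single glusterd run; if glusterd restarted in between, as observed above, the address change is expected and proves nothing.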