Re: [Regression-failure] glusterd status

----- Original Message -----
> I'm currently running the test in a loop on slave0. I've not had any
> failures yet.
> I'm running on commit d1ff9dead (glusterd: Fix conf->generation to
> stop new peers participating in a transaction, while the transaction
> is in progress.), Avra's fix which was merged into master yesterday.
> 
> I made a small change to log the addresses of the peerinfo objects in
> __glusterd_peer_rpc_notify, as before. What I'm observing is that the
> change in memory address is due to glusterd being restarted during the
> test. So we can rule out duplication of peerinfos as the cause of the
> problems that were observed.

Great news! We can revisit this if we see duplicate peerinfos in a single
glusterd 'session'. For now this is a non-issue.

> 
> I'll keep running the test in a loop to see if I can hit any failures.
> If I get a failure I'll debug it.

I'd leave this to you.
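For reference, re-running a test until its first failure can be sketched as below. run_test is only a stand-in for the real regression test invocation (which isn't named in this thread); here it fails on its 5th call so the sketch is self-contained:

```shell
#!/bin/sh
# Re-run a test in a loop, counting passes, and stop at the first failure.
# run_test is a placeholder for the actual test command; this stub
# deliberately fails on its 5th invocation so the script terminates.
n=0
run_test() {
    n=$((n + 1))
    [ "$n" -lt 5 ]      # exit status of the "test": 0 (pass) until n reaches 5
}

passes=0
while run_test; do
    passes=$((passes + 1))
done
echo "test failed after $passes successful runs"
```

In practice you would replace run_test with the actual test command and keep the logs from the failing iteration for debugging.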
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



