Argh!! How many more problems is my change gonna cause? :\

I didn't hit this particular problem when testing on my local VMs. Could the
issues we are facing here be related to the environment the VMs are running
on?

~kaushal

On Tue, Apr 28, 2015 at 12:15 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
> I see the NetBSD regression doesn't execute peer probe from any tests
> apart from mgmt_v3-locks.t; if it had, those would have also failed. So
> the conclusion is that peer probe doesn't work on NetBSD. Glusterd
> crashes with the following backtrace when peer probe is executed:
>
> #0  __uatomic_add_return (len=8, val=1, addr=<optimized out>) at
>     /usr/pkg/include/urcu/uatomic.h:233
> 233          __asm__ __volatile__("ud2");
> (gdb) bt
> #0  __uatomic_add_return (len=8, val=1, addr=<optimized out>) at
>     /usr/pkg/include/urcu/uatomic.h:233
> #1  glusterd_peerinfo_new (state=state@entry=GD_FRIEND_STATE_DEFAULT,
>     uuid=uuid@entry=0x0, hostname=<optimized out>,
>     hostname@entry=0xb8b040e0 "127.1.1.2", port=port@entry=24007)
>     at glusterd-peer-utils.c:308
> #2  0xb91e0068 in glusterd_friend_add (hoststr=hoststr@entry=0xb8b040e0
>     "127.1.1.2", port=port@entry=24007,
>     state=state@entry=GD_FRIEND_STATE_DEFAULT, uuid=uuid@entry=0x0,
>     friend=friend@entry=0xb89fff30, restore=restore@entry=_gf_false,
>     args=args@entry=0xb89fff38) at glusterd-handler.c:3212
> #3  0xb91e2927 in glusterd_probe_begin (req=req@entry=0xb8f40040,
>     hoststr=0xb8b040e0 "127.1.1.2", port=24007, dict=0xb9c013b0,
>     op_errno=op_errno@entry=0xb89fff9c) at glusterd-handler.c:3320
> #4  0xb91e2de2 in __glusterd_handle_cli_probe (req=0xb8f40040) at
>     glusterd-handler.c:1078
> #5  0xb91dc932 in glusterd_big_locked_handler (req=req@entry=0xb8f40040,
>     actor_fn=actor_fn@entry=0xb91e294d <__glusterd_handle_cli_probe>) at
>     glusterd-handler.c:83
> #6  0xb91dc9e8 in glusterd_handle_cli_probe (req=0xb8f40040) at
>     glusterd-handler.c:1105
> #7  0xbb787c82 in synctask_wrap (old_task=0xb8f66000) at syncop.c:375
> #8  0xbb39c630 in ?? () from /usr/lib/libc.so.12
>
> http://review.gluster.org/#/c/10147/ is the cause of it. I will continue
> investigating this, but I am not able to understand what the line
> __asm__ __volatile__("ud2") is indicating. Any experts :) ?
>
> ~Atin

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
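[On the ud2 question: the trap at urcu/uatomic.h:233 is liburcu's way of flagging
an atomic operation on an operand size the build does not support. ud2 is an x86
opcode that is architecturally guaranteed to be undefined, so executing it raises
an invalid-opcode trap and the process dies with SIGILL. Frame #0 shows len=8,
i.e. an 8-byte atomic add on what appears to be a 32-bit NetBSD build, which has
no 8-byte case to dispatch to. Below is a simplified, illustrative sketch of that
size dispatch, not the real urcu/uatomic.h; the function and guard names are made
up, but the pattern is the same.]

/*
 * demo_ud2.c -- simplified sketch of the size dispatch behind
 * __uatomic_add_return.  Only operand sizes the target supports get a
 * "lock; xadd" implementation; anything else falls through to "ud2",
 * which the CPU treats as an illegal instruction (SIGILL).
 */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

static unsigned long
demo_uatomic_add_return(void *addr, unsigned long val, int len)
{
        switch (len) {
        case 4: {                        /* 32-bit operands: always available */
                unsigned int old = (unsigned int) val;

                __asm__ __volatile__("lock; xaddl %0, %1"
                                     : "+r" (old), "+m" (*(unsigned int *) addr)
                                     : : "memory");
                return old + val;        /* post-increment value */
        }
#if ULONG_MAX > 0xffffffffUL             /* 64-bit operands: 64-bit builds only */
        case 8: {
                unsigned long old = val;

                __asm__ __volatile__("lock; xaddq %0, %1"
                                     : "+r" (old), "+m" (*(unsigned long *) addr)
                                     : : "memory");
                return old + val;
        }
#endif
        default:
                /* Unsupported operand size: execute an illegal instruction
                 * on purpose so the bug is caught right at the call site. */
                __asm__ __volatile__("ud2");
        }
        return 0;
}

int
main(void)
{
        unsigned int counter32 = 0;
        uint64_t     counter64 = 0;

        printf("32-bit add_return -> %lu\n",
               demo_uatomic_add_return(&counter32, 1, sizeof(counter32)));

        /* On a 32-bit build the next call takes the default branch and the
         * process is killed with SIGILL, just like glusterd in frame #0. */
        printf("64-bit add_return -> %lu\n",
               demo_uatomic_add_return(&counter64, 1, sizeof(counter64)));
        return 0;
}

[So the crash suggests the refcount being incremented in glusterd_peerinfo_new
is an 8-byte quantity that the 32-bit liburcu on the NetBSD slaves cannot handle
atomically; that is consistent with http://review.gluster.org/#/c/10147/ being
the trigger, though the exact counter involved would need to be confirmed in the
patch itself.]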