Gordan, can you please attach the backtrace from gdb and the logs from the
server at the time of the core dump?

avati

On 06/05/2008, gordan@xxxxxxxxxx <gordan@xxxxxxxxxx> wrote:
>
> I just did a bit more digging, and it seems related to the posix-locks
> feature. If I remove it from the volume stacks, everything works fine.
>
> The machine on which glusterfsd core dumps is the secondary server (as
> per the afr component list ordering) and is x86-64. The primary (IA32)
> machine continues fine without a core dump.
>
> Configs for both sides are attached. Have I made a mistake in the
> configs?
>
> Gordan
>
> On Tue, 6 May 2008, gordan@xxxxxxxxxx wrote:
>
> > Hi,
> >
> > I've just observed what seems like a problem related to starting
> > glusterfsd remotely over ssh. If I ssh into one of my glusterfs
> > servers, su to root and start glusterfsd, it starts fine and
> > everything works. However, as soon as I log out, glusterfsd seems
> > to die and core dump.
> >
> > If I do it with nohup, it doesn't seem to happen, so I'm guessing
> > it's the session reset that causes the problem. I'm guessing this
> > isn't the expected behaviour. An uncaught signal (SIGHUP?)
> > somewhere, perhaps?
> >
> > This only seems to have started happening since I added the
> > posix-locks brick and re-ordered the storage volume bricks so they
> > are listed in the same order on all servers.
> >
> > Gordan
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel

--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.
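
For anyone following along: the backtrace avati is asking for can be
pulled out of the core file with gdb roughly as below. The binary path
and core file name here are examples only; where the core lands and
what it is called depend on the distribution and the kernel's
core_pattern setting.

    # make sure the shell that starts glusterfsd allows core dumps
    ulimit -c unlimited

    # load the crashed binary together with its core file
    gdb /usr/sbin/glusterfsd core.12345

    # then, inside gdb:
    (gdb) bt full                # backtrace of the crashing thread
    (gdb) thread apply all bt    # backtraces of every thread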
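
On the config side, the attached files are not reproduced here, but
for reference a server-side posix-locks stack from this era of
GlusterFS usually looks something like the fragment below. The volume
names and export directory are invented for illustration; the point is
that features/posix-locks is layered directly on top of the
storage/posix volume whose locks it manages.

    # hypothetical server volfile fragment; names and paths are examples
    volume brick
      type storage/posix
      option directory /data/export
    end-volume

    volume brick-locks
      type features/posix-locks
      subvolumes brick
    end-volume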
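
On the SIGHUP theory: when an ssh session ends, the kernel sends
SIGHUP to the processes in that session, but the default disposition
of SIGHUP is plain termination without a core, so the core file
suggests the process is actually crashing (e.g. SIGSEGV) around the
hangup rather than simply being killed by it. The nohup observation
still fits the hangup mechanism, though. A generic POSIX sketch of the
two standard ways a daemon survives session exit follows; this is
illustrative code, not anything taken from the glusterfsd sources.

    /* Illustrative POSIX code only, not from the glusterfsd sources. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Option 1: what nohup does for you -- ignore the hangup
         * signal delivered when the controlling terminal goes away. */
        signal(SIGHUP, SIG_IGN);

        /* Option 2: what a proper daemon does -- fork and call
         * setsid() so the child has no controlling terminal and never
         * receives a terminal-driven SIGHUP in the first place. */
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid > 0)
            _exit(0);  /* parent exits; child lives on in a new session */
        if (setsid() < 0) {
            perror("setsid");
            return 1;
        }

        /* ... the daemon's main loop would run here ... */
        pause();
        return 0;
    }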