Re: Segfault in glusterfsd

Avati,

 Here is the server spec file you asked for:
----------
volume disk
 type storage/posix              # POSIX FS translator
 option directory /mnt/hd        # Export this directory
end-volume

volume locks
 type features/posix-locks
 subvolumes disk
end-volume

volume brick    # io-threads can give performance a boost
 type performance/io-threads
 option thread-count 8
 subvolumes locks
end-volume

### Add network serving capability to above brick
volume server
 type protocol/server
 option transport-type tcp/server     # For TCP/IP transport
# option bind-address 192.168.1.10     # Default is to listen on all interfaces
# option listen-port 6996              # Default is 6996
 option client-volume-filename /etc/glusterfs/client.vol
 subvolumes brick
 option auth.ip.brick.allow 10.1.0.*  # Allow access to "brick" volume
end-volume
---------
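
For completeness, a minimal client spec matching this server could look like the following (just a sketch; the remote-host address is a placeholder, not necessarily our real one):
----------
volume client
 type protocol/client
 option transport-type tcp/client     # For TCP/IP transport
 option remote-host 192.168.1.10      # Placeholder: server's address
# option remote-port 6996             # Default is 6996
 option remote-subvolume brick        # Exported volume name on the server
end-volume
----------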

As for core dump, gdb says that:
---
Core was generated by `[glusterfsd]'.
Program terminated with signal 11, Segmentation fault.
#0  dict_destroy (this=0x2aaab0000f70) at dict.c:251
251     dict.c: No such file or directory.
       in dict.c
(gdb) p *prev
Cannot access memory at address 0x4449475f52454c
---
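
(The faulting address is made up of printable ASCII bytes; read little-endian they spell "LER_GID", so it looks like prev was overwritten by string data, perhaps the tail of a dict key.)

To show where that dereference would blow up, here is a rough sketch of the usual linked-list teardown pattern (an illustration only, not the actual dict.c source):
---
#include <stdlib.h>

/* Simplified stand-ins for the real glusterfs dict structures. */
typedef struct data_pair {
        struct data_pair *next;
        char             *key;
        void             *value;
} data_pair_t;

typedef struct {
        data_pair_t *members_list;
} dict_t;

static void
dict_destroy_sketch (dict_t *this)
{
        data_pair_t *prev = this->members_list;

        while (prev) {
                /* If prev was clobbered (e.g. by string bytes), this
                   dereference is exactly where SIGSEGV hits. */
                data_pair_t *pair = prev->next;

                free (prev->key);
                free (prev);
                prev = pair;
        }
        free (this);
}
---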

WBR,
 Andrey

2007/6/15, Anand Avati <avati@xxxxxxxxxxxxx>:
Andrey,
  Can you also send along the server-side spec file? If you still have the
core, is it possible to get the output of 'p *prev' from gdb?



