Re: Problem with new glusterfs installation...

Here is the backtrace given for the core dump:

Core was generated by `glusterfs --no-daemon -f /etc/glusterfs/glusterfs-client.vol -l /var/log/gluste'.
Program terminated with signal 11, Segmentation fault.

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /usr/lib/libglusterfs.so.0...done.
Loaded symbols for /usr/lib/libglusterfs.so.0
Reading symbols from /usr/lib/libfuse.so.2...done.
Loaded symbols for /usr/lib/libfuse.so.2
Reading symbols from /lib/librt.so.1...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libpthread.so.0...done.
Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/xlator/protocol/client.so...done.
Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/xlator/protocol/client.so
Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/xlator/cluster/unify.so...done.
Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/xlator/cluster/unify.so
Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/scheduler/rr.so...done.
Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/scheduler/rr.so
Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/transport/tcp/client.so...done.
Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/transport/tcp/client.so
Reading symbols from /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/libgcc_s.so.1...done.
Loaded symbols for /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/libgcc_s.so.1
#0  0xb7fb4410 in ?? ()
(gdb) bt
#0  0xb7fb4410 in ?? ()
#1  0xbfb226a8 in ?? ()
#2  0x0000000b in ?? ()
#3  0x00006c08 in ?? ()
#4  0xb7f762cd in raise () from /lib/libpthread.so.0
#5  0xb7fa97be in gf_print_trace (signum=6) at common-utils.c:221
#6  0xb7fb4420 in ?? ()
#7  0x00000006 in ?? ()
#8  0x00000033 in ?? ()
#9  0x00000000 in ?? ()
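
Most of the frames resolve to "??", which usually means either a clobbered stack or missing debug symbols; rebuilding with CFLAGS="-g -O0" should give named frames. For reference, a trace like the one above is typically produced along these lines (the binary and core paths will vary with your setup):

  $ gdb /usr/sbin/glusterfs core   # open the core against the binary that produced it
  (gdb) bt                         # backtrace of the crashing thread
  (gdb) thread apply all bt        # backtraces for every thread, in case the fault is elsewhere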


Any help is much appreciated...thanks.

-Jon

On 5/30/07, Anand Avati <avati@xxxxxxxxxxxxx> wrote:

Jonathan,
  it looks like the glusterfs client has exited or segfaulted. Is it
possible for you to get a backtrace from the core? (If it is not
generating a core, run 'ulimit -c unlimited', then start glusterfs
with -N (non-daemon mode) and redo the steps that generate the error.)
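A minimal sketch of those steps, assuming bash and reusing the client volfile, log path and mount point from your mail below (the core file is written to the working directory):

  ulimit -c unlimited
  /usr/sbin/glusterfs -N -f /etc/glusterfs/glusterfs-client.vol \
      --log-file=/var/log/glusterfs/glusterfs.log /mnt/test &
  # in another shell, reproduce the failure, e.g. 'ls /mnt/test', then:
  gdb /usr/sbin/glusterfs core   # core may be named core.<pid> depending on
                                 # kernel settings; type 'bt' at the (gdb) prompt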
  that apart, please try the 1.3-pre4 release and see if you still get
the error. 1.2.3 is pretty old and a lot of things have happened
since.

thanks,
avati

2007/5/29, Jonathan Newman <jbnewm@xxxxxxxxx>:
> Hey guys, I am relatively new to glusterfs and am having a bit of difficulty
> getting a clustered fs up and running using it. Here are the details:
> GlusterFS package: 1.2.3
>
> 3 servers total: 2 running glusterfsd and 1 as a client to mount the clustered fs.
> The glusterfs-server.vol files on the two servers are identical and contain:
> ### File: /etc/glusterfs-server.vol - GlusterFS Server Volume Specification
>
> ### Export volume "brick" with the contents of "/data" directory.
> volume brick
>   type storage/posix                   # POSIX FS translator
>   option directory /data               # Export this directory
> end-volume
>
> ### Add network serving capability to above brick.
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
>   option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   subvolumes brick
>   option auth.ip.brick.allow 10.* # Allow access to "brick" volume
> end-volume
>
> The client file contains this:
> ### File: /etc/glusterfs/glusterfs-client.vol - GlusterFS Client Volume Specification
>
> ### Add client feature and attach to remote subvolume of server1
> volume client1
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 10.20.70.1        # IP address of the remote brick
>   option remote-subvolume brick         # name of the remote volume
> end-volume
>
> ### Add client feature and attach to remote subvolume of server2
> volume client2
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 10.20.70.2        # IP address of the remote brick
>   option remote-subvolume brick         # name of the remote volume
> end-volume
>
> ### Add unify feature to cluster "server1" and "server2". Associate an
> ### appropriate scheduler that matches your I/O demand.
> volume brick
>   type cluster/unify
>   subvolumes client1 client2
>   ### ** Round Robin (RR) Scheduler **
>   option scheduler rr
>   option rr.limits.min-free-disk 4GB          # Units in KB, MB and GB are allowed
>   option rr.refresh-interval 10               # Check server brick after 10s.
> end-volume
>
> Server daemons on both servers are started using:
> /usr/sbin/glusterfsd --log-file=/var/log/glusterfs/glusterfs.log
>
> And then I mount the file system on the client using this command:
> /usr/sbin/glusterfs -f /etc/glusterfs/glusterfs-client.vol --log-file=/var/log/glusterfs/glusterfs.log /mnt/test
>
> All appears well and running mount on the client produces (among other items):
> glusterfs:17983 on /mnt/test type fuse (rw,allow_other,default_permissions)
>
> However, the logs on both servers show the same output:
> Tue May 29 11:56:29 2007 [DEBUG] tcp/server: Registering socket (4) for new transport object of 10.20.30.1
> Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: received port = 1020
> Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: IP addr = 10.*, received ip addr = 10.20.30.1
> Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: accepted client from 10.20.30.1
> Tue May 29 11:56:29 2007 [DEBUG] libglusterfs: full_rw: 0 bytes r/w instead of 113
> Tue May 29 11:56:29 2007 [DEBUG] libglusterfs: full_rw:  Ñ÷·Ág, error string 'File exists'
> Tue May 29 11:56:29 2007 [DEBUG] libglusterfs/protocol: gf_block_unserialize_transport: full_read of header failed
> Tue May 29 11:56:29 2007 [DEBUG] protocol/server: cleaned up xl_private of 0x8050178
> Tue May 29 11:56:29 2007 [DEBUG] tcp/server: destroying transport object for 10.20.30.1:1020 (fd=4)
>
> And running any file operation from within /mnt/test yields:
> ~ # cd /mnt/test; ls
> ls: .: Transport endpoint is not connected
>
> 10.20.30.1 is the client and 10.20.70.[1,2] are the servers.
>
> Anyone have any pointers that may lead me in the right direction?
>
> Thanks.
>
> -Jon
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>


--
Anand V. Avati


