Re: gluster client crash

Karl,
can you get a backtrace from the core dump with gdb, please? That would help
a lot.
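
For reference, a minimal sketch of the gdb session that would produce one,
assuming the client binary was installed to /usr/local/sbin/glusterfs and you
know where the kernel wrote the core file (both paths below are assumptions,
adjust them to your setup):

    # load the client binary together with the core dump it produced
    gdb /usr/local/sbin/glusterfs /path/to/core

    # inside gdb:
    (gdb) bt full                  # backtrace of the crashing thread, with locals
    (gdb) thread apply all bt      # backtraces of every thread
    (gdb) quit

The output of "bt full" (and "thread apply all bt" if several threads are
involved) is what we need.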

avati

2007/12/17, Karl Bernard <karl@xxxxxxxxx>:
>
>
> The client crashed, if this can be helpful:
>
> 2007-12-15 17:03:59 W [client-protocol.c:289:client_protocol_xfer]
> sxx01: attempting to pipeline request type(0) op(34) with handshake
>
> ---------
> got signal (11), printing backtrace
> ---------
> [0xcc5420]
> /usr/local/lib/glusterfs/1.3.8/xlator/performance/write-behind.so
> [0x196c1d]
> /usr/local/lib/glusterfs/1.3.8/xlator/performance/io-threads.so[0x262dab]
>
> /usr/local/lib/glusterfs/1.3.8/xlator/cluster/afr.so(afr_close_cbk+0x1d6)[0x118b46]
> /usr/local/lib/glusterfs/1.3.8/xlator/protocol/client.so[0x13367d]
>
> /usr/local/lib/glusterfs/1.3.8/xlator/protocol/client.so(notify+0xa84)[0x1374c4]
> /usr/local/lib/libglusterfs.so.0(transport_notify+0x37)[0x6c9717]
> /usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xf3)[0x6ca473]
> /usr/local/lib/libglusterfs.so.0(poll_iteration+0x7c)[0x6c984c]
> [glusterfs](main+0x424)[0x804a494]
> /lib/libc.so.6(__libc_start_main+0xdc)[0xa49dec]
> [glusterfs][0x8049fe1]
>
>
> glusterfs 1.3.8
> installed from tla, last patch:
> 2007-12-03 22:29:15 GMT Anand V. Avati <avati@xxxxxxxxx>        patch-594
>
> Config client:
> ----------------------------------------------------------
> volume sxx01
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host sxx01b
>   option remote-subvolume brick
> end-volume
>
> volume sxx02
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host sxx02b
>   option remote-subvolume brick
> end-volume
>
> volume afr1-2
>   type cluster/afr
>   subvolumes sxx01 sxx02
> end-volume
>
> volume iot
>   type performance/io-threads
>   subvolumes afr1-2
>   option thread-count 8
> end-volume
>
> ## Add writebehind feature
> volume writebehind
>   type performance/write-behind
>   option aggregate-size 128kB
>   subvolumes iot
> end-volume
>
> ## Add readahead feature
> volume readahead
>   type performance/read-ahead
>   option page-size 256kB
>   option page-count 16       # cache per file = (page-count x page-size)
>   subvolumes writebehind
> end-volume
>
> ------------------------------------------------------
>
> Config Server:
> volume brick-posix
>   type storage/posix
>   option directory /data/glusterfs/dataspace
> end-volume
>
> volume brick-ns
>   type storage/posix
>   option directory /data/glusterfs/namespace
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 2
>   option cache-size 32MB
>   subvolumes brick-posix
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server
>   subvolumes brick brick-ns
>   option auth.ip.brick.allow 172.16.93.*
>   option auth.ip.brick-ns.allow 172.16.93.*
> end-volume
>
> ------------------------------------
>
> The client was most likely checking for the existence of a file or
> writing a new file to the servers.
>



-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.

