Re: Ceph Status - Segmentation Fault

/usr/bin/ceph is a Python script, so it's not the script itself that is
segfaulting but some binary it launches, and there doesn't appear to be much
information about that in the log you uploaded.
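
If it helps to narrow down which binary or library is actually faulting (a
general suggestion on my part, not something taken from your log), the kernel
normally logs user-space segfaults:

$ dmesg | grep -i segfault        # or: journalctl -k | grep -i segfault
# Each matching line names the crashing process and, at the end, the binary or
# shared library the fault occurred in. Illustrative format only:
# ceph[12345]: segfault at 0 ip 00007f... sp 00007f... error 4 in libfoo.so[...]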

Are you able to capture a core file and generate a stack trace from gdb?

The following may help to get some data.

$ ulimit -c unlimited
$ ceph -s
$ ls core.*   # This should list a recently made core file
$ file core.XXX
# Now run gdb with the binary named in the output of the previous "file" command
$ gdb -c core.XXX  $(which binary_name) -batch -ex "thr apply all bt"
$ ulimit -c 0
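
If no core.* file appears, core dumps may be getting redirected by the
kernel's core_pattern setting (just a general pointer, I don't know how your
system is configured):

$ cat /proc/sys/kernel/core_pattern
# "core" or "core.%p" means a plain file in the current directory; if it
# starts with a pipe (e.g. to systemd-coredump or abrt) you'll need that
# tool to retrieve the dump, e.g. on systems using systemd-coredump:
$ coredumpctl gdb ceph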

You may need debuginfo for the relevant binary and libraries installed to get
good stack traces, but it's something you can try.
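
The debug symbols usually come from distro packages; the exact package names
vary by distribution and release, so treat these purely as examples:

# Debian/Ubuntu
$ sudo apt-get install ceph-dbg
# RHEL/CentOS (needs yum-utils)
$ sudo debuginfo-install ceph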

For example, here's the whole procedure run against a deliberately crashed "sleep" process.

$ ulimit -c unlimited
$ sleep 100 &
[1] 32056
$ kill -SIGSEGV 32056
$ ls core.*
core.32056
[1]+  Segmentation fault      (core dumped) sleep 100
$ file core.32056 
core.32056: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'sleep 100'
$ gdb -c core.32056 $(which sleep) -batch -ex "thr apply all bt"
[New LWP 32056]

warning: the debug information found in "/usr/lib/debug//lib64/libc-2.22.so.debug" does not match "/lib64/libc.so.6" (CRC mismatch).


warning: the debug information found in "/usr/lib/debug//usr/lib64/libc-2.22.so.debug" does not match "/lib64/libc.so.6" (CRC mismatch).


warning: the debug information found in "/usr/lib/debug//lib64/ld-2.22.so.debug" does not match "/lib64/ld-linux-x86-64.so.2" (CRC mismatch).


warning: the debug information found in "/usr/lib/debug//usr/lib64/ld-2.22.so.debug" does not match "/lib64/ld-linux-x86-64.so.2" (CRC mismatch).

Core was generated by `sleep 100'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f1fd99e84b0 in __nanosleep_nocancel () from /lib64/libc.so.6

Thread 1 (LWP 32056):
#0  0x00007f1fd99e84b0 in __nanosleep_nocancel () from /lib64/libc.so.6
#1  0x00005641e10ba29f in rpl_nanosleep ()
#2  0x00005641e10ba100 in xnanosleep ()
#3  0x00005641e10b7a1d in main ()

$ ulimit -c 0

HTH,
Brad

----- Original Message -----
> From: "Mathias Buresch" <mathias.buresch@xxxxxxxxxxxx>
> To: ceph-users@xxxxxxxx
> Sent: Monday, 23 May, 2016 9:41:51 PM
> Subject:  Ceph Status - Segmentation Fault
> 
> Hi there,
> I was updating Ceph to 0.94.7 and now I am getting segmentation faults.
> 
> When getting status via "ceph -s" or "ceph health detail" I am getting
> an error "Segmentation fault".
> 
> I have only two monitor daemons.. but I haven't had any problems with that
> yet.. maybe the maintenance time was too long this time..?!
> 
> When getting the status via admin socket I get following for both:
> 
> ceph daemon mon.pix01 mon_status
> {
>     "name": "pix01",
>     "rank": 0,
>     "state": "leader",
>     "election_epoch": 226,
>     "quorum": [
>         0,
>         1
>     ],
>     "outside_quorum": [],
>     "extra_probe_peers": [],
>     "sync_provider": [],
>     "monmap": {
>         "epoch": 1,
>         "fsid": "28af67eb-4060-4770-ac1d-d2be493877af",
>         "modified": "2014-11-12 15:44:27.182395",
>         "created": "2014-11-12 15:44:27.182395",
>         "mons": [
>             {
>                 "rank": 0,
>                 "name": "pix01",
>                 "addr": "x.x.x.x:6789\/0"
>             },
>             {
>                 "rank": 1,
>                 "name": "pix02",
>                 "addr": "x.x.x.x:6789\/0"
>             }
>         ]
>     }
> }
> 
> ceph daemon mon.pix02 mon_status
> {
>     "name": "pix02",
>     "rank": 1,
>     "state": "peon",
>     "election_epoch": 226,
>     "quorum": [
>         0,
>         1
>     ],
>     "outside_quorum": [],
>     "extra_probe_peers": [],
>     "sync_provider": [],
>     "monmap": {
>         "epoch": 1,
>         "fsid": "28af67eb-4060-4770-ac1d-d2be493877af",
>         "modified": "2014-11-12 15:44:27.182395",
>         "created": "2014-11-12 15:44:27.182395",
>         "mons": [
>             {
>                 "rank": 0,
>                 "name": "pix01",
>                 "addr": "x.x.x.x:6789\/0"
>             },
>             {
>                 "rank": 1,
>                 "name": "pix02",
>                 "addr": "x.x.x.x:6789\/0"
>             }
>         ]
>     }
> }
> 
> Please find the logs with a higher debug level attached to this email.
> 
> 
> Kind regards
> Mathias
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



