Hello,
One of our OSDs crashed with a segmentation fault this weekend; the backtrace and the start of the recent-events dump are below (ceph 0.94.2). Could you advise what might be causing this?
Best regards,
Alex
2015-09-07 14:55:01.345638 7fae6c158700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6813/5003974 pipe(0x1dd73000 sd=257 :6830 s=2 pgs=14271 cs=251 l=0 c=0x10d34580).fault with nothing to send, going to standby
2015-09-07 14:56:16.948998 7fae643e8700 -1 *** Caught signal (Segmentation fault) **
in thread 7fae643e8700
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
1: /usr/bin/ceph-osd() [0xacb3ba]
2: (()+0x10340) [0x7faea044e340]
3: (tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned long, int)+0x103) [0x7faea067fac3]
4: (tcmalloc::ThreadCache::ListTooLong(tcmalloc::ThreadCache::FreeList*, unsigned long)+0x1b) [0x7faea067fb7b]
5: (operator delete(void*)+0x1f8) [0x7faea068ef68]
6: (std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::_M_erase(std::_Rb_tree_node<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >*)+0x58) [0xca2438]
7: (std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::erase(int const&)+0xdf) [0xca252f]
8: (Pipe::writer()+0x93c) [0xca097c]
9: (Pipe::Writer::entry()+0xd) [0xca40dd]
10: (()+0x8182) [0x7faea0446182]
11: (clone()+0x6d) [0x7fae9e9b100d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
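Following the NOTE above, the raw frame addresses can be resolved against the ceph-osd binary on the crashed node. A minimal sketch, assuming the binary still sits at the path shown in the backtrace and that matching debug symbols for 0.94.2 are installed (the debug package name varies by distro, so that step is left as a comment):

```shell
BIN=/usr/bin/ceph-osd   # path taken from frame 1 of the backtrace

# Debug symbols must match the running 0.94.2 build exactly; install the
# distro's ceph debug package first (name is distro-dependent), e.g.:
#   apt-get install ceph-dbg

if [ -x "$BIN" ]; then
    # Full disassembly with source interleaved, as the NOTE suggests:
    objdump -rdS "$BIN" > ceph-osd.asm
    # Or resolve a single frame, e.g. frame 1's 0xacb3ba:
    addr2line -Cfe "$BIN" 0xacb3ba
else
    echo "ceph-osd binary not found at $BIN; run this on the crashed node"
fi
```

With symbols present, addr2line should turn each bracketed address into a function name plus source file and line, which is what upstream will want alongside the trace.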
--- begin dump of recent events ---
-10000> 2015-08-20 05:32:32.454940 7fae8e897700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6806/4003754 pipe(0x1992d000 sd=142 :6830 s=0 pgs=0 cs=0 l=0 c=0x12bf5700).accept connect_seq 816 vs existing 815 state standby
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com