manually remove problematic snapset: ceph-osd crashes

Hello,

my Luminous ceph-osd daemons are crashing with a segmentation fault
while backfilling.

Is there any way to manually remove the problematic snapset/object?
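The only approach I have come up with so far is taking the OSD down
and removing the object offline with ceph-objectstore-tool. This is
only a rough, untested sketch; it assumes the affected OSD is osd.86
and the PG is 3.80e (both taken from the log below), plus the default
data path -- adjust for your setup:

    # stop the affected OSD first
    systemctl stop ceph-osd@86

    # export the PG as a backup before touching anything
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-86 \
        --pgid 3.80e --op export --file /root/pg-3.80e.export

    # look up the object's JSON identifier
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-86 \
        --pgid 3.80e --op list rbd_data.1ba91116b8b4567.0000000000004362

    # remove the object (destructive!)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-86 \
        --pgid 3.80e '<json-from-list-output>' remove

Is that safe here, or is there a better way?

This is the crash: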

    -1> 2018-01-16 20:32:50.001722 7f27d53fe700  0 osd.86 pg_epoch:
917877 pg[3.80e( v 917875'69934125 (917365'69924082,917875'69934125] lb
3:7018abae:::rbd_data.1ba91116b8b4567.0000000000004362:head (bitwise)
local-lis/les=913221/913222 n=895 ec=15/15 lis/c 913221/909473 les/c/f
913222/909474/0 917852/917852/917219) [50,54,86]/[54] r=-1 lpr=917852
pi=[909473,917852)/11 luod=0'0 crt=917875'69934125 lcod 917875'69934124
active+remapped]  snapset b0cee=[b0cee]:{} legacy_snaps []
     0> 2018-01-16 20:32:50.004728 7f27d53fe700 -1 *** Caught signal
(Segmentation fault) **
 in thread 7f27d53fe700 thread_name:tp_osd_tp

 ceph version 12.2.2-93-gd6da8d7
(d6da8d77a4b2220e6bdd61e4bdd911a9cd91946c) luminous (stable)
 1: (()+0xa43dec) [0x563de6597dec]
 2: (()+0xf890) [0x7f282f7fc890]
 3: (std::_Rb_tree_iterator<snapid_t> std::_Rb_tree<snapid_t, snapid_t,
std::_Identity<snapid_t>, std::less<snapid_t>, std::allocator<snapid_t>
>::_M_insert_unique_<snapid_t&>(std::_Rb_tree_const_iterator<snapid_t>,
snapid_t&)+0x40) [0x563de612f6c0]
 4: (PrimaryLogPG::on_local_recover(hobject_t const&, ObjectRecoveryInfo
const&, std::shared_ptr<ObjectContext>, bool,
ObjectStore::Transaction*)+0xaae) [0x563de6184fee]
 5: (ReplicatedBackend::handle_push(pg_shard_t, PushOp const&,
PushReplyOp*, ObjectStore::Transaction*)+0x31d) [0x563de62f71dd]
 6: (ReplicatedBackend::_do_push(boost::intrusive_ptr<OpRequest>)+0x18f)
[0x563de62f747f]
 7:
(ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x2d1)
[0x563de6307521]
 8: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50)
[0x563de622ce40]
 9: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&,
ThreadPool::TPHandle&)+0x77b) [0x563de619914b]
 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>,
boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3f7)
[0x563de6025bc7]
 11: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest>
const&)+0x57) [0x563de629d947]
 12: (OSD::ShardedOpWQ::_process(unsigned int,
ceph::heartbeat_handle_d*)+0x108c) [0x563de6054d1c]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x88d)
[0x563de65e0e6d]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x563de65e2e30]
 15: (()+0x8064) [0x7f282f7f5064]
 16: (clone()+0x6d) [0x7f282e8e962d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is
needed to interpret this.
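If it helps, I can also provide the disassembly the NOTE asks for;
generating it should just be (assuming the packaged binary at
/usr/bin/ceph-osd):

    objdump -rdS /usr/bin/ceph-osd > ceph-osd.objdump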

Greets,
Stefan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com