[JEWEL] OSD Crash - Tier Cache

Hello,

Three days ago, in the morning, I had 3 OSDs crash...

OK, I just restarted them... but the next night, it happened again.

It's a Ceph cluster in production, and no change was made before the issue appeared. None.

The OSDs that crashed are all on NVMe disks, so I checked the disks with SMART: all of them report healthy. I zeroed the disks and recreated the OSDs; some hours later, the same issue.
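For context, the health check was done roughly like this (the device paths are placeholders, not my actual disks):

```shell
# Query SMART/health data for an NVMe device via smartmontools:
smartctl -a /dev/nvme0n1

# NVMe-specific health summary via nvme-cli, if installed:
nvme smart-log /dev/nvme0
```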

Only the NVMe OSDs are affected: they form a pool with 3 replicas, configured as a writeback cache tier in front of an erasure-coded pool.
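For reference, the tier is set up along these lines (pool names "ecpool" and "cachepool" are placeholders, not my real ones):

```shell
# Sketch of a writeback cache tier in front of an EC pool (Jewel-era commands).
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool

# Hit-set tracking on the cache pool -- these hit-set archive objects
# are what hit_set_trim (the function in the crash below) operates on:
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool hit_set_count 12
ceph osd pool set cachepool hit_set_period 14400
```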

I started running some benchmarks...

Under heavy read load, an OSD crashed.
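The read-heavy load can be reproduced with something like the following (the pool name and durations are just examples):

```shell
# Write benchmark objects first, keeping them for the read phase:
rados bench -p cachepool 60 write --no-cleanup
# Then hammer the pool with sequential reads:
rados bench -p cachepool 60 seq
# Remove the benchmark objects afterwards:
rados -p cachepool cleanup
```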

The log for the crashed OSD:

     -1> 2017-10-14 14:30:51.480776 7f1f2bacf700  1 osd.52 pg_epoch: 69549 pg[54.f1( v 69473'28361 (61738'25361,69473'28361] local-les=69529 n=12 ec=56693 les/c/f 69529/69529/66447 69527/69528/69528) [52,50,48] r=0 lpr=69528 crt=69473'28361 mlcod 0'0 active+clean] hit_set_trim 54:8f000000:.ceph-internal::hit_set_54.f1_archive_2017-09-25 04%3a15%3a55.433997Z_2017-09-25 10%3a14%3a00.884483Z:head not found
     0> 2017-10-14 14:30:51.487640 7f1f2bacf700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7f1f2bacf700 time 2017-10-14 14:30:51.484663
osd/ReplicatedPG.cc: 11782: FAILED assert(obc)

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x55e72f4ac9e5]
 2: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55e72ef8652d]
 3: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55e72ef891bc]
 4: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55e72efa7be2]
 5: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55e72ef648a7]
 6: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55e72ee17bad]
 7: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55e72ee17dfd]
 8: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55e72ee1b7db]
 9: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55e72f49c987]
 10: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55e72f49e8f0]
 11: (()+0x7e25) [0x7f1f4fe8ee25]
 12: (clone()+0x6d) [0x7f1f4e51834d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

The OSDs keep crashing...

The only fix is to remove the OSD and recreate it...
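The remove/recreate cycle, for reference (osd.52 and the device path are examples):

```shell
# Drain and remove the OSD, then recreate it on the wiped device.
ceph osd out 52
systemctl stop ceph-osd@52
ceph osd crush remove osd.52
ceph auth del osd.52
ceph osd rm 52

# Recreate with the Jewel-era ceph-disk tooling (device is a placeholder):
ceph-disk prepare /dev/nvme0n1
ceph-disk activate /dev/nvme0n1p1
```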

But after another round of heavy reads, another OSD crashes...

So I tried using other OSDs for this pool: same effect.

I need a solution. Maybe create another pool, stop all RBD clients, and replace this cache pool with the new one? Any ideas on how to fix this?
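If swapping the cache pool is the way to go, I assume the procedure would look roughly like this (untested sketch; "ecpool", "cachepool", and "newcachepool" are placeholder names):

```shell
# 1. Stop new writes from landing in the old cache: switch to forward mode.
ceph osd tier cache-mode cachepool forward
# 2. Flush and evict everything still held in the cache:
rados -p cachepool cache-flush-evict-all
# 3. Detach the old cache tier:
ceph osd tier remove-overlay ecpool
ceph osd tier remove ecpool cachepool
# 4. Attach the replacement pool as the new writeback tier:
ceph osd tier add ecpool newcachepool
ceph osd tier cache-mode newcachepool writeback
ceph osd tier set-overlay ecpool newcachepool
```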

Thanks for your help.

Rest of the log:

================

73709551615, dirty_divergent_priors: false, divergent_priors: 0, writeout_from: 69549'25366, trimmed:
    -2> 2017-10-14 14:30:51.423621 7f1f27ac7700  5 write_log with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, dirty_divergent_priors: false, divergent_priors: 0, writeout_from: 69549'25379, trimmed:
    -1> 2017-10-14 14:30:51.480776 7f1f2bacf700  1 osd.52 pg_epoch: 69549 pg[54.f1( v 69473'28361 (61738'25361,69473'28361] local-les=69529 n=12 ec=56693 les/c/f 69529/69529/66447 69527/69528/69528) [52,50,48] r=0 lpr=69528 crt=69473'28361 mlcod 0'0 active+clean] hit_set_trim 54:8f000000:.ceph-internal::hit_set_54.f1_archive_2017-09-25 04%3a15%3a55.433997Z_2017-09-25 10%3a14%3a00.884483Z:head not found
     0> 2017-10-14 14:30:51.487640 7f1f2bacf700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7f1f2bacf700 time 2017-10-14 14:30:51.484663
osd/ReplicatedPG.cc: 11782: FAILED assert(obc)

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x55e72f4ac9e5]
 2: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55e72ef8652d]
 3: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55e72ef891bc]
 4: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55e72efa7be2]
 5: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55e72ef648a7]
 6: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55e72ee17bad]
 7: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55e72ee17dfd]
 8: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55e72ee1b7db]
 9: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55e72f49c987]
 10: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55e72f49e8f0]
 11: (()+0x7e25) [0x7f1f4fe8ee25]
 12: (clone()+0x6d) [0x7f1f4e51834d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 0 lockdep
   0/ 0 context
   0/ 0 crush
   0/ 0 mds
   0/ 0 mds_balancer
   0/ 0 mds_locker
   0/ 0 mds_log
   0/ 0 mds_log_expire
   0/ 0 mds_migrator
   0/ 0 buffer
   0/ 0 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 0 journaler
   0/ 5 objectcacher
   0/ 5 client
   5/ 5 osd
   0/ 0 optracker
   0/ 0 objclass
   0/ 0 filestore
   0/ 0 journal
   0/ 0 ms
   0/ 0 mon
   0/ 0 monc
   0/ 0 paxos
   0/ 0 tp
   0/ 0 auth
   0/ 0 crypto
   0/ 0 finisher
   0/ 0 heartbeatmap
   0/ 0 perfcounter
   0/ 0 rgw
   0/ 0 civetweb
   0/ 0 javaclient
   0/ 0 asok
   0/ 0 throttle
   0/ 0 refs
   0/ 0 xio
   0/ 0 compressor
   0/ 0 newstore
   0/ 0 bluestore
   0/ 0 bluefs
   0/ 0 bdev
   0/ 0 kstore
   0/ 0 rocksdb
   0/ 0 leveldb
   0/ 0 kinetic
   0/ 0 fuse
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.52.log
--- end dump of recent events ---
2017-10-14 14:30:51.511426 7f1f2bacf700 -1 *** Caught signal (Aborted) **
 in thread 7f1f2bacf700 thread_name:tp_osd_tp

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (()+0x92c18a) [0x55e72f3af18a]
 2: (()+0xf5e0) [0x7f1f4fe965e0]
 3: (gsignal()+0x37) [0x7f1f4e4551f7]
 4: (abort()+0x148) [0x7f1f4e4568e8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x267) [0x55e72f4acbc7]
 6: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55e72ef8652d]
 7: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55e72ef891bc]
 8: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55e72efa7be2]
 9: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55e72ef648a7]
 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55e72ee17bad]
 11: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55e72ee17dfd]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55e72ee1b7db]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55e72f49c987]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55e72f49e8f0]
 15: (()+0x7e25) [0x7f1f4fe8ee25]
 16: (clone()+0x6d) [0x7f1f4e51834d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
     0> 2017-10-14 14:30:51.511426 7f1f2bacf700 -1 *** Caught signal (Aborted) **
 in thread 7f1f2bacf700 thread_name:tp_osd_tp

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (()+0x92c18a) [0x55e72f3af18a]
 2: (()+0xf5e0) [0x7f1f4fe965e0]
 3: (gsignal()+0x37) [0x7f1f4e4551f7]
 4: (abort()+0x148) [0x7f1f4e4568e8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x267) [0x55e72f4acbc7]
 6: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55e72ef8652d]
 7: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55e72ef891bc]
 8: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55e72efa7be2]
 9: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55e72ef648a7]
 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55e72ee17bad]
 11: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55e72ee17dfd]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55e72ee1b7db]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55e72f49c987]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55e72f49e8f0]
 15: (()+0x7e25) [0x7f1f4fe8ee25]
 16: (clone()+0x6d) [0x7f1f4e51834d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 0 lockdep
   0/ 0 context
   0/ 0 crush
   0/ 0 mds
   0/ 0 mds_balancer
   0/ 0 mds_locker
   0/ 0 mds_log
   0/ 0 mds_log_expire
   0/ 0 mds_migrator
   0/ 0 buffer
   0/ 0 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 0 journaler
   0/ 5 objectcacher
   0/ 5 client
   5/ 5 osd
   0/ 0 optracker
   0/ 0 objclass
   0/ 0 filestore
   0/ 0 journal
   0/ 0 ms
   0/ 0 mon
   0/ 0 monc
   0/ 0 paxos
   0/ 0 tp
   0/ 0 auth
   0/ 0 crypto
   0/ 0 finisher
   0/ 0 heartbeatmap
   0/ 0 perfcounter
   0/ 0 rgw
   0/ 0 civetweb
   0/ 0 javaclient
   0/ 0 asok
   0/ 0 throttle
   0/ 0 refs
   0/ 0 xio
   0/ 0 compressor
   0/ 0 newstore
   0/ 0 bluestore
   0/ 0 bluefs
   0/ 0 bdev
   0/ 0 kstore
   0/ 0 rocksdb
   0/ 0 leveldb
   0/ 0 kinetic
   0/ 0 fuse
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.52.log
--- end dump of recent events ---
2017-10-14 14:31:11.771568 7f7bf6bfc800  0 set uid:gid to 167:167 (ceph:ceph)
2017-10-14 14:31:11.771584 7f7bf6bfc800  0 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe), process ceph-osd, pid 1211795
2017-10-14 14:31:11.773417 7f7bf6bfc800  0 pidfile_write: ignore empty --pid-file
2017-10-14 14:31:11.811114 7f7bf6bfc800  0 filestore(/var/lib/ceph/osd/ceph-52) backend xfs (magic 0x58465342)
2017-10-14 14:31:11.812553 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2017-10-14 14:31:11.812559 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2017-10-14 14:31:11.812572 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: splice is supported
2017-10-14 14:31:11.813628 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2017-10-14 14:31:11.813657 7f7bf6bfc800  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_feature: extsize is disabled by conf
2017-10-14 14:31:11.907673 7f7bf6bfc800  0 filestore(/var/lib/ceph/osd/ceph-52) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2017-10-14 14:31:11.970498 7f7bf6bfc800  0 <cls> cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
2017-10-14 14:31:11.970792 7f7bf6bfc800  0 <cls> cls/hello/cls_hello.cc:305: loading cls_hello
2017-10-14 14:31:11.981918 7f7bf6bfc800  0 osd.52 69549 crush map has features 2303210029056, adjusting msgr requires for clients
2017-10-14 14:31:11.981927 7f7bf6bfc800  0 osd.52 69549 crush map has features 2578087936000 was 8705, adjusting msgr requires for mons
2017-10-14 14:31:11.981936 7f7bf6bfc800  0 osd.52 69549 crush map has features 2578087936000, adjusting msgr requires for osds
2017-10-14 14:31:12.001989 7f7bf6bfc800  0 osd.52 69549 load_pgs
2017-10-14 14:31:13.670447 7f7bf6bfc800  0 osd.52 69549 load_pgs opened 199 pgs
2017-10-14 14:31:13.670498 7f7bf6bfc800  0 osd.52 69549 using 0 op queue with priority op cut off at 64.
2017-10-14 14:31:13.671259 7f7bf6bfc800 -1 osd.52 69549 log_to_monitors {default=true}
2017-10-14 14:31:13.680940 7f7bf6bfc800  0 osd.52 69549 done with init, starting boot process
2017-10-14 14:31:15.764323 7f7bbc194700  0 -- 192.168.44.222:6824/1211795 >> 192.168.44.223:6800/282955 pipe(0x55d266196800 sd=163 :6824 s=0 pgs=0 cs=0 l=0 c=0x55d265f76a80).accept connect_seq 0 vs existing 0 state wait
2017-10-14 14:31:15.765024 7f7bbaf82700  0 -- 192.168.44.222:6824/1211795 >> :/0 pipe(0x55d266276800 sd=176 :6824 s=0 pgs=0 cs=0 l=0 c=0x55d265f79780).accept failed to getpeername (107) Transport endpoint is not connected
2017-10-14 14:31:15.765451 7f7bbbb8e700  0 -- 192.168.44.222:6824/1211795 >> 192.168.44.223:6804/282958 pipe(0x55d266274000 sd=167 :6824 s=0 pgs=0 cs=0 l=0 c=0x55d265f79480).accept connect_seq 0 vs existing 0 state wait
2017-10-14 14:31:16.785118 7f7bd23c2700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7f7bd23c2700 time 2017-10-14 14:31:16.782116
osd/ReplicatedPG.cc: 11782: FAILED assert(obc)

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x55d23ed309e5]
 2: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55d23e80a52d]
 3: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55d23e80d1bc]
 4: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55d23e82bbe2]
 5: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55d23e7e88a7]
 6: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55d23e69bbad]
 7: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55d23e69bdfd]
 8: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55d23e69f7db]
 9: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55d23ed20987]
 10: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55d23ed228f0]
 11: (()+0x7e25) [0x7f7bf5342e25]
 12: (clone()+0x6d) [0x7f7bf39cc34d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
   -37> 2017-10-14 14:31:11.768660 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command perfcounters_dump hook 0x55d24a544030
   -36> 2017-10-14 14:31:11.768680 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command 1 hook 0x55d24a544030
   -35> 2017-10-14 14:31:11.768684 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command perf dump hook 0x55d24a544030
   -34> 2017-10-14 14:31:11.768687 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command perfcounters_schema hook 0x55d24a544030
   -33> 2017-10-14 14:31:11.768690 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command 2 hook 0x55d24a544030
   -32> 2017-10-14 14:31:11.768692 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command perf schema hook 0x55d24a544030
   -31> 2017-10-14 14:31:11.768696 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command perf reset hook 0x55d24a544030
   -30> 2017-10-14 14:31:11.768698 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command config show hook 0x55d24a544030
   -29> 2017-10-14 14:31:11.768702 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command config set hook 0x55d24a544030
   -28> 2017-10-14 14:31:11.768706 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command config get hook 0x55d24a544030
   -27> 2017-10-14 14:31:11.768709 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command config diff hook 0x55d24a544030
   -26> 2017-10-14 14:31:11.768712 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command log flush hook 0x55d24a544030
   -25> 2017-10-14 14:31:11.768714 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command log dump hook 0x55d24a544030
   -24> 2017-10-14 14:31:11.768719 7f7bf6bfc800  5 asok(0x55d24a56c140) register_command log reopen hook 0x55d24a544030
   -23> 2017-10-14 14:31:11.771568 7f7bf6bfc800  0 set uid:gid to 167:167 (ceph:ceph)
   -22> 2017-10-14 14:31:11.771584 7f7bf6bfc800  0 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe), process ceph-osd, pid 1211795
   -21> 2017-10-14 14:31:11.773417 7f7bf6bfc800  0 pidfile_write: ignore empty --pid-file
   -20> 2017-10-14 14:31:11.811114 7f7bf6bfc800  0 filestore(/var/lib/ceph/osd/ceph-52) backend xfs (magic 0x58465342)
   -19> 2017-10-14 14:31:11.812553 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
   -18> 2017-10-14 14:31:11.812559 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
   -17> 2017-10-14 14:31:11.812572 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: splice is supported
   -16> 2017-10-14 14:31:11.813628 7f7bf6bfc800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
   -15> 2017-10-14 14:31:11.813657 7f7bf6bfc800  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_feature: extsize is disabled by conf
   -14> 2017-10-14 14:31:11.907673 7f7bf6bfc800  0 filestore(/var/lib/ceph/osd/ceph-52) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
   -13> 2017-10-14 14:31:11.970498 7f7bf6bfc800  0 <cls> cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
   -12> 2017-10-14 14:31:11.970792 7f7bf6bfc800  0 <cls> cls/hello/cls_hello.cc:305: loading cls_hello
   -11> 2017-10-14 14:31:11.981918 7f7bf6bfc800  0 osd.52 69549 crush map has features 2303210029056, adjusting msgr requires for clients
   -10> 2017-10-14 14:31:11.981927 7f7bf6bfc800  0 osd.52 69549 crush map has features 2578087936000 was 8705, adjusting msgr requires for mons
    -9> 2017-10-14 14:31:11.981936 7f7bf6bfc800  0 osd.52 69549 crush map has features 2578087936000, adjusting msgr requires for osds
    -8> 2017-10-14 14:31:12.001989 7f7bf6bfc800  0 osd.52 69549 load_pgs
    -7> 2017-10-14 14:31:13.670447 7f7bf6bfc800  0 osd.52 69549 load_pgs opened 199 pgs
    -6> 2017-10-14 14:31:13.670498 7f7bf6bfc800  0 osd.52 69549 using 0 op queue with priority op cut off at 64.
    -5> 2017-10-14 14:31:13.671259 7f7bf6bfc800 -1 osd.52 69549 log_to_monitors {default=true}
    -4> 2017-10-14 14:31:13.680940 7f7bf6bfc800  0 osd.52 69549 done with init, starting boot process
    -3> 2017-10-14 14:31:15.764323 7f7bbc194700  0 -- 192.168.44.222:6824/1211795 >> 192.168.44.223:6800/282955 pipe(0x55d266196800 sd=163 :6824 s=0 pgs=0 cs=0 l=0 c=0x55d265f76a80).accept connect_seq 0 vs existing 0 state wait
    -2> 2017-10-14 14:31:15.765024 7f7bbaf82700  0 -- 192.168.44.222:6824/1211795 >> :/0 pipe(0x55d266276800 sd=176 :6824 s=0 pgs=0 cs=0 l=0 c=0x55d265f79780).accept failed to getpeername (107) Transport endpoint is not connected
    -1> 2017-10-14 14:31:15.765451 7f7bbbb8e700  0 -- 192.168.44.222:6824/1211795 >> 192.168.44.223:6804/282958 pipe(0x55d266274000 sd=167 :6824 s=0 pgs=0 cs=0 l=0 c=0x55d265f79480).accept connect_seq 0 vs existing 0 state wait
     0> 2017-10-14 14:31:16.785118 7f7bd23c2700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7f7bd23c2700 time 2017-10-14 14:31:16.782116
osd/ReplicatedPG.cc: 11782: FAILED assert(obc)

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x55d23ed309e5]
 2: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55d23e80a52d]
 3: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55d23e80d1bc]
 4: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55d23e82bbe2]
 5: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55d23e7e88a7]
 6: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55d23e69bbad]
 7: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55d23e69bdfd]
 8: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55d23e69f7db]
 9: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55d23ed20987]
 10: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55d23ed228f0]
 11: (()+0x7e25) [0x7f7bf5342e25]
 12: (clone()+0x6d) [0x7f7bf39cc34d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 0 lockdep
   0/ 0 context
   0/ 0 crush
   0/ 0 mds
   0/ 0 mds_balancer
   0/ 0 mds_locker
   0/ 0 mds_log
   0/ 0 mds_log_expire
   0/ 0 mds_migrator
   0/ 0 buffer
   0/ 0 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 0 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 0 osd
   0/ 0 optracker
   0/ 0 objclass
   0/ 0 filestore
   0/ 0 journal
   0/ 0 ms
   0/ 0 mon
   0/ 0 monc
   0/ 0 paxos
   0/ 0 tp
   0/ 0 auth
   0/ 0 crypto
   0/ 0 finisher
   0/ 0 heartbeatmap
   0/ 0 perfcounter
   0/ 0 rgw
   0/ 0 civetweb
   0/ 0 javaclient
   0/ 0 asok
   0/ 0 throttle
   0/ 0 refs
   0/ 0 xio
   0/ 0 compressor
   0/ 0 newstore
   0/ 0 bluestore
   0/ 0 bluefs
   0/ 0 bdev
   0/ 0 kstore
   0/ 0 rocksdb
   0/ 0 leveldb
   0/ 0 kinetic
   0/ 0 fuse
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.52.log
--- end dump of recent events ---
2017-10-14 14:31:16.788186 7f7bd23c2700 -1 *** Caught signal (Aborted) **
 in thread 7f7bd23c2700 thread_name:tp_osd_tp

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (()+0x92c18a) [0x55d23ec3318a]
 2: (()+0xf5e0) [0x7f7bf534a5e0]
 3: (gsignal()+0x37) [0x7f7bf39091f7]
 4: (abort()+0x148) [0x7f7bf390a8e8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x267) [0x55d23ed30bc7]
 6: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55d23e80a52d]
 7: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55d23e80d1bc]
 8: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55d23e82bbe2]
 9: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55d23e7e88a7]
 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55d23e69bbad]
 11: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55d23e69bdfd]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55d23e69f7db]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55d23ed20987]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55d23ed228f0]
 15: (()+0x7e25) [0x7f7bf5342e25]
 16: (clone()+0x6d) [0x7f7bf39cc34d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
     0> 2017-10-14 14:31:16.788186 7f7bd23c2700 -1 *** Caught signal (Aborted) **
 in thread 7f7bd23c2700 thread_name:tp_osd_tp

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (()+0x92c18a) [0x55d23ec3318a]
 2: (()+0xf5e0) [0x7f7bf534a5e0]
 3: (gsignal()+0x37) [0x7f7bf39091f7]
 4: (abort()+0x148) [0x7f7bf390a8e8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x267) [0x55d23ed30bc7]
 6: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55d23e80a52d]
 7: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55d23e80d1bc]
 8: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55d23e82bbe2]
 9: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55d23e7e88a7]
 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55d23e69bbad]
 11: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55d23e69bdfd]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55d23e69f7db]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55d23ed20987]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55d23ed228f0]
 15: (()+0x7e25) [0x7f7bf5342e25]
 16: (clone()+0x6d) [0x7f7bf39cc34d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 0 lockdep
   0/ 0 context
   0/ 0 crush
   0/ 0 mds
   0/ 0 mds_balancer
   0/ 0 mds_locker
   0/ 0 mds_log
   0/ 0 mds_log_expire
   0/ 0 mds_migrator
   0/ 0 buffer
   0/ 0 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 0 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 0 osd
   0/ 0 optracker
   0/ 0 objclass
   0/ 0 filestore
   0/ 0 journal
   0/ 0 ms
   0/ 0 mon
   0/ 0 monc
   0/ 0 paxos
   0/ 0 tp
   0/ 0 auth
   0/ 0 crypto
   0/ 0 finisher
   0/ 0 heartbeatmap
   0/ 0 perfcounter
   0/ 0 rgw
   0/ 0 civetweb
   0/ 0 javaclient
   0/ 0 asok
   0/ 0 throttle
   0/ 0 refs
   0/ 0 xio
   0/ 0 compressor
   0/ 0 newstore
   0/ 0 bluestore
   0/ 0 bluefs
   0/ 0 bdev
   0/ 0 kstore
   0/ 0 rocksdb
   0/ 0 leveldb
   0/ 0 kinetic
   0/ 0 fuse
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.52.log
--- end dump of recent events ---
2017-10-14 14:31:36.998251 7f088dd89800  0 set uid:gid to 167:167 (ceph:ceph)
2017-10-14 14:31:36.998266 7f088dd89800  0 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe), process ceph-osd, pid 1212200
2017-10-14 14:31:36.999943 7f088dd89800  0 pidfile_write: ignore empty --pid-file
2017-10-14 14:31:37.031148 7f088dd89800  0 filestore(/var/lib/ceph/osd/ceph-52) backend xfs (magic 0x58465342)
2017-10-14 14:31:37.032600 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2017-10-14 14:31:37.032605 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2017-10-14 14:31:37.032620 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: splice is supported
2017-10-14 14:31:37.033815 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2017-10-14 14:31:37.033853 7f088dd89800  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_feature: extsize is disabled by conf
2017-10-14 14:31:37.069803 7f088dd89800  0 filestore(/var/lib/ceph/osd/ceph-52) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2017-10-14 14:31:37.145280 7f088dd89800  0 <cls> cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
2017-10-14 14:31:37.145420 7f088dd89800  0 <cls> cls/hello/cls_hello.cc:305: loading cls_hello
2017-10-14 14:31:37.150727 7f088dd89800  0 osd.52 69552 crush map has features 2303210029056, adjusting msgr requires for clients
2017-10-14 14:31:37.150735 7f088dd89800  0 osd.52 69552 crush map has features 2578087936000 was 8705, adjusting msgr requires for mons
2017-10-14 14:31:37.150743 7f088dd89800  0 osd.52 69552 crush map has features 2578087936000, adjusting msgr requires for osds
2017-10-14 14:31:37.170925 7f088dd89800  0 osd.52 69552 load_pgs
2017-10-14 14:31:38.922044 7f088dd89800  0 osd.52 69552 load_pgs opened 199 pgs
2017-10-14 14:31:38.922092 7f088dd89800  0 osd.52 69552 using 0 op queue with priority op cut off at 64.
2017-10-14 14:31:38.922873 7f088dd89800 -1 osd.52 69552 log_to_monitors {default=true}
2017-10-14 14:31:38.927395 7f088dd89800  0 osd.52 69552 done with init, starting boot process
2017-10-14 14:31:40.176119 7f085369d700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.221:6804/970395 pipe(0x563b2d001400 sd=160 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2cf80480).accept connect_seq 0 vs existing 0 state connecting
2017-10-14 14:31:40.176862 7f0852a91700  0 -- 192.168.44.222:6824/1212200 >> :/0 pipe(0x563b2d20f400 sd=170 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2cf81c80).accept failed to getpeername (107) Transport endpoint is not connected
2017-10-14 14:31:40.177534 7f085248b700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.223:6800/282955 pipe(0x563b2d210800 sd=173 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2cf81e00).accept connect_seq 0 vs existing 0 state connecting
2017-10-14 14:31:43.024772 7f084ea51700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.220:6824/4112 pipe(0x563b2d8b5400 sd=198 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335780).accept connect_seq 0 vs existing 0 state connecting
2017-10-14 14:31:43.024836 7f084eb52700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.222:6826/980397 pipe(0x563b2d8b4000 sd=31 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335600).accept connect_seq 0 vs existing 0 state connecting
2017-10-14 14:31:43.024905 7f084e84f700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.220:6808/3472 pipe(0x563b2d8b6800 sd=199 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335900).accept connect_seq 0 vs existing 0 state connecting
2017-10-14 14:31:43.025002 7f084e74e700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.220:6818/3874 pipe(0x563b2d8bd400 sd=201 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335a80).accept connect_seq 0 vs existing 0 state wait
2017-10-14 14:31:43.036252 7f0869bce700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7f0869bce700 time 2017-10-14 14:31:43.033384
osd/ReplicatedPG.cc: 11782: FAILED assert(obc)

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x563b06b429e5]
 2: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x563b0661c52d]
 3: (ReplicatedPG::hit_set_persist()+0xd7c) [0x563b0661f1bc]
 4: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x563b0663dbe2]
 5: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x563b065fa8a7]
 6: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x563b064adbad]
 7: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x563b064addfd]
 8: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x563b064b17db]
 9: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x563b06b32987]
 10: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x563b06b348f0]
 11: (()+0x7e25) [0x7f088c4cfe25]
 12: (clone()+0x6d) [0x7f088ab5934d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
   -41> 2017-10-14 14:31:36.995391 7f088dd89800  5 asok(0x563b1169c140) register_command perfcounters_dump hook 0x563b11674030
   -40> 2017-10-14 14:31:36.995410 7f088dd89800  5 asok(0x563b1169c140) register_command 1 hook 0x563b11674030
   -39> 2017-10-14 14:31:36.995414 7f088dd89800  5 asok(0x563b1169c140) register_command perf dump hook 0x563b11674030
   -38> 2017-10-14 14:31:36.995418 7f088dd89800  5 asok(0x563b1169c140) register_command perfcounters_schema hook 0x563b11674030
   -37> 2017-10-14 14:31:36.995420 7f088dd89800  5 asok(0x563b1169c140) register_command 2 hook 0x563b11674030
   -36> 2017-10-14 14:31:36.995424 7f088dd89800  5 asok(0x563b1169c140) register_command perf schema hook 0x563b11674030
   -35> 2017-10-14 14:31:36.995427 7f088dd89800  5 asok(0x563b1169c140) register_command perf reset hook 0x563b11674030
   -34> 2017-10-14 14:31:36.995430 7f088dd89800  5 asok(0x563b1169c140) register_command config show hook 0x563b11674030
   -33> 2017-10-14 14:31:36.995434 7f088dd89800  5 asok(0x563b1169c140) register_command config set hook 0x563b11674030
   -32> 2017-10-14 14:31:36.995438 7f088dd89800  5 asok(0x563b1169c140) register_command config get hook 0x563b11674030
   -31> 2017-10-14 14:31:36.995440 7f088dd89800  5 asok(0x563b1169c140) register_command config diff hook 0x563b11674030
   -30> 2017-10-14 14:31:36.995443 7f088dd89800  5 asok(0x563b1169c140) register_command log flush hook 0x563b11674030
   -29> 2017-10-14 14:31:36.995447 7f088dd89800  5 asok(0x563b1169c140) register_command log dump hook 0x563b11674030
   -28> 2017-10-14 14:31:36.995451 7f088dd89800  5 asok(0x563b1169c140) register_command log reopen hook 0x563b11674030
   -27> 2017-10-14 14:31:36.998251 7f088dd89800  0 set uid:gid to 167:167 (ceph:ceph)
   -26> 2017-10-14 14:31:36.998266 7f088dd89800  0 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe), process ceph-osd, pid 1212200
   -25> 2017-10-14 14:31:36.999943 7f088dd89800  0 pidfile_write: ignore empty --pid-file
   -24> 2017-10-14 14:31:37.031148 7f088dd89800  0 filestore(/var/lib/ceph/osd/ceph-52) backend xfs (magic 0x58465342)
   -23> 2017-10-14 14:31:37.032600 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
   -22> 2017-10-14 14:31:37.032605 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
   -21> 2017-10-14 14:31:37.032620 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: splice is supported
   -20> 2017-10-14 14:31:37.033815 7f088dd89800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
   -19> 2017-10-14 14:31:37.033853 7f088dd89800  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_feature: extsize is disabled by conf
   -18> 2017-10-14 14:31:37.069803 7f088dd89800  0 filestore(/var/lib/ceph/osd/ceph-52) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
   -17> 2017-10-14 14:31:37.145280 7f088dd89800  0 <cls> cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
   -16> 2017-10-14 14:31:37.145420 7f088dd89800  0 <cls> cls/hello/cls_hello.cc:305: loading cls_hello
   -15> 2017-10-14 14:31:37.150727 7f088dd89800  0 osd.52 69552 crush map has features 2303210029056, adjusting msgr requires for clients
   -14> 2017-10-14 14:31:37.150735 7f088dd89800  0 osd.52 69552 crush map has features 2578087936000 was 8705, adjusting msgr requires for mons
   -13> 2017-10-14 14:31:37.150743 7f088dd89800  0 osd.52 69552 crush map has features 2578087936000, adjusting msgr requires for osds
   -12> 2017-10-14 14:31:37.170925 7f088dd89800  0 osd.52 69552 load_pgs
   -11> 2017-10-14 14:31:38.922044 7f088dd89800  0 osd.52 69552 load_pgs opened 199 pgs
   -10> 2017-10-14 14:31:38.922092 7f088dd89800  0 osd.52 69552 using 0 op queue with priority op cut off at 64.
    -9> 2017-10-14 14:31:38.922873 7f088dd89800 -1 osd.52 69552 log_to_monitors {default=true}
    -8> 2017-10-14 14:31:38.927395 7f088dd89800  0 osd.52 69552 done with init, starting boot process
    -7> 2017-10-14 14:31:40.176119 7f085369d700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.221:6804/970395 pipe(0x563b2d001400 sd=160 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2cf80480).accept connect_seq 0 vs existing 0 state connecting
    -6> 2017-10-14 14:31:40.176862 7f0852a91700  0 -- 192.168.44.222:6824/1212200 >> :/0 pipe(0x563b2d20f400 sd=170 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2cf81c80).accept failed to getpeername (107) Transport endpoint is not connected
    -5> 2017-10-14 14:31:40.177534 7f085248b700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.223:6800/282955 pipe(0x563b2d210800 sd=173 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2cf81e00).accept connect_seq 0 vs existing 0 state connecting
    -4> 2017-10-14 14:31:43.024772 7f084ea51700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.220:6824/4112 pipe(0x563b2d8b5400 sd=198 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335780).accept connect_seq 0 vs existing 0 state connecting
    -3> 2017-10-14 14:31:43.024836 7f084eb52700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.222:6826/980397 pipe(0x563b2d8b4000 sd=31 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335600).accept connect_seq 0 vs existing 0 state connecting
    -2> 2017-10-14 14:31:43.024905 7f084e84f700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.220:6808/3472 pipe(0x563b2d8b6800 sd=199 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335900).accept connect_seq 0 vs existing 0 state connecting
    -1> 2017-10-14 14:31:43.025002 7f084e74e700  0 -- 192.168.44.222:6824/1212200 >> 192.168.44.220:6818/3874 pipe(0x563b2d8bd400 sd=201 :6824 s=0 pgs=0 cs=0 l=0 c=0x563b2d335a80).accept connect_seq 0 vs existing 0 state wait
     0> 2017-10-14 14:31:43.036252 7f0869bce700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7f0869bce700 time 2017-10-14 14:31:43.033384
osd/ReplicatedPG.cc: 11782: FAILED assert(obc)

 [backtrace identical to the one above]

--- logging levels ---
   0/ 5 none
   0/ 0 lockdep
   0/ 0 context
   0/ 0 crush
   0/ 0 mds
   0/ 0 mds_balancer
   0/ 0 mds_locker
   0/ 0 mds_log
   0/ 0 mds_log_expire
   0/ 0 mds_migrator
   0/ 0 buffer
   0/ 0 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 0 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 0 osd
   0/ 0 optracker
   0/ 0 objclass
   0/ 0 filestore
   0/ 0 journal
   0/ 0 ms
   0/ 0 mon
   0/ 0 monc
   0/ 0 paxos
   0/ 0 tp
   0/ 0 auth
   0/ 0 crypto
   0/ 0 finisher
   0/ 0 heartbeatmap
   0/ 0 perfcounter
   0/ 0 rgw
   0/ 0 civetweb
   0/ 0 javaclient
   0/ 0 asok
   0/ 0 throttle
   0/ 0 refs
   0/ 0 xio
   0/ 0 compressor
   0/ 0 newstore
   0/ 0 bluestore
   0/ 0 bluefs
   0/ 0 bdev
   0/ 0 kstore
   0/ 0 rocksdb
   0/ 0 leveldb
   0/ 0 kinetic
   0/ 0 fuse
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.52.log
--- end dump of recent events ---
2017-10-14 14:31:43.039313 7f0869bce700 -1 *** Caught signal (Aborted) **
 in thread 7f0869bce700 thread_name:tp_osd_tp

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (()+0x92c18a) [0x563b06a4518a]
 2: (()+0xf5e0) [0x7f088c4d75e0]
 3: (gsignal()+0x37) [0x7f088aa961f7]
 4: (abort()+0x148) [0x7f088aa978e8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x267) [0x563b06b42bc7]
 6: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x563b0661c52d]
 7: (ReplicatedPG::hit_set_persist()+0xd7c) [0x563b0661f1bc]
 8: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x563b0663dbe2]
 9: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x563b065fa8a7]
 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x563b064adbad]
 11: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x563b064addfd]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x563b064b17db]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x563b06b32987]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x563b06b348f0]
 15: (()+0x7e25) [0x7f088c4cfe25]
 16: (clone()+0x6d) [0x7f088ab5934d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

2017-10-14 14:32:03.244377 7ff96df39800  0 set uid:gid to 167:167 (ceph:ceph)
2017-10-14 14:32:03.244391 7ff96df39800  0 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe), process ceph-osd, pid 1213516
2017-10-14 14:32:03.246067 7ff96df39800  0 pidfile_write: ignore empty --pid-file
2017-10-14 14:32:03.277163 7ff96df39800  0 filestore(/var/lib/ceph/osd/ceph-52) backend xfs (magic 0x58465342)
2017-10-14 14:32:03.278613 7ff96df39800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2017-10-14 14:32:03.278619 7ff96df39800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2017-10-14 14:32:03.278633 7ff96df39800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: splice is supported
2017-10-14 14:32:03.279840 7ff96df39800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2017-10-14 14:32:03.279880 7ff96df39800  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-52) detect_feature: extsize is disabled by conf
2017-10-14 14:32:03.327307 7ff96df39800  0 filestore(/var/lib/ceph/osd/ceph-52) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2017-10-14 14:32:03.334481 7ff96df39800  0 <cls> cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
2017-10-14 14:32:03.334617 7ff96df39800  0 <cls> cls/hello/cls_hello.cc:305: loading cls_hello
2017-10-14 14:32:03.339971 7ff96df39800  0 osd.52 69556 crush map has features 2303210029056, adjusting msgr requires for clients
2017-10-14 14:32:03.339980 7ff96df39800  0 osd.52 69556 crush map has features 2578087936000 was 8705, adjusting msgr requires for mons
2017-10-14 14:32:03.339988 7ff96df39800  0 osd.52 69556 crush map has features 2578087936000, adjusting msgr requires for osds
2017-10-14 14:32:03.360124 7ff96df39800  0 osd.52 69556 load_pgs
2017-10-14 14:32:05.079413 7ff96df39800  0 osd.52 69556 load_pgs opened 199 pgs
2017-10-14 14:32:05.079455 7ff96df39800  0 osd.52 69556 using 0 op queue with priority op cut off at 64.
2017-10-14 14:32:05.080164 7ff96df39800 -1 osd.52 69556 log_to_monitors {default=true}
2017-10-14 14:32:05.085614 7ff96df39800  0 osd.52 69556 done with init, starting boot process
2017-10-14 14:32:06.570914 7ff9333ee700  0 -- 192.168.44.222:6824/1213516 >> 192.168.44.221:6804/970395 pipe(0x55b9eac98000 sd=163 :6824 s=0 pgs=0 cs=0 l=0 c=0x55b9eaa1ca80).accept connect_seq 0 vs existing 0 state connecting
2017-10-14 14:32:06.572421 7ff9329e4700  0 -- 192.168.44.222:6824/1213516 >> 192.168.44.223:6800/282955 pipe(0x55b9eace8000 sd=171 :6824 s=0 pgs=0 cs=0 l=0 c=0x55b9eaa1d800).accept connect_seq 0 vs existing 0 state wait
2017-10-14 14:32:08.953606 7ff949d23700 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::OpContextUPtr&, unsigned int)' thread 7ff949d23700 time 2017-10-14 14:32:08.950740
osd/ReplicatedPG.cc: 11782: FAILED assert(obc)

 ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x55b9c36dd9e5]
 2: (ReplicatedPG::hit_set_trim(std::unique_ptr<ReplicatedPG::OpContext, std::default_delete<ReplicatedPG::OpContext> >&, unsigned int)+0x6dd) [0x55b9c31b752d]
 3: (ReplicatedPG::hit_set_persist()+0xd7c) [0x55b9c31ba1bc]
 4: (ReplicatedPG::do_op(std::shared_ptr<OpRequest>&)+0x1a92) [0x55b9c31d8be2]
 5: (ReplicatedPG::do_request(std::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x747) [0x55b9c31958a7]
 6: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x41d) [0x55b9c3048bad]
 7: (PGQueueable::RunVis::operator()(std::shared_ptr<OpRequest>&)+0x6d) [0x55b9c3048dfd]
 8: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x77b) [0x55b9c304c7db]
 9: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x887) [0x55b9c36cd987]
 10: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55b9c36cf8f0]
 11: (()+0x7e25) [0x7ff96c67fe25]
 12: (clone()+0x6d) [0x7ff96ad0934d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

2017-10-14 14:32:08.956827 7ff949d23700 -1 *** Caught signal (Aborted) **
 in thread 7ff949d23700 thread_name:tp_osd_tp
 [same abort/backtrace and logging-levels dump as above, trimmed]
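
A side note on the object the assert complains about: the hit_set_trim line at the top of this mail names the archive object the OSD could not find, and Ceph prints ':' in object names escaped as '%3a'. A quick decode (pure illustration, not a Ceph tool; the object name is copied verbatim from my log) shows the time interval that archived hit set covered:

```python
# Decode the hit_set archive object name from the hit_set_trim log line.
# Ceph escapes ':' as '%3a' when logging object names; unquoting recovers
# the start/end timestamps of the interval the hit set archived.
from urllib.parse import unquote

name = ("hit_set_54.f1_archive_2017-09-25 04%3a15%3a55.433997Z"
        "_2017-09-25 10%3a14%3a00.884483Z")

start, end = name.split("_archive_")[1].split("Z_")
print(unquote(start + "Z"))  # 2017-09-25 04:15:55.433997Z
print(unquote(end))          # 2017-09-25 10:14:00.884483Z
```

So the OSD is trying to trim a hit set archived weeks before the crash, and the object is already gone from the cache pool.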


--
Performance Conseil Informatique
Pascal Pucci
Infrastructure Consultant
pascal.pucci@xxxxxxxxxxxxxxx
Mobile : 06 51 47 84 98
Office : 02 85 52 41 81
http://www.performance-conseil-informatique.net
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
