ceph-osd start failed because of PG::peek_map_epoch() assertion

Hi Cephers,


One of the OSDs in our cluster cannot start because a PG on that OSD fails to load infover_key from RocksDB; the log is below.
Could someone shed some light on this? Thank you!
 
Log:

2018-06-26 15:09:16.036832 b66c6000  0 osd.41 3712 load_pgs
2056114 2018-06-26 15:09:16.036921 b66c6000 10 osd.41 3712 load_pgs ignoring unrecognized meta
2056115 2018-06-26 15:09:16.037002 b66c6000 15 bluestore(/var/lib/ceph/osd/ceph-41) omap_get_values 4.b_head oid #4:d0000000::::head#
2056116 2018-06-26 15:09:16.037023 b66c6000 30 bluestore.OnodeSpace(0xa0a4aec in 0x5eccbd0) lookup
2056117 2018-06-26 15:09:16.037030 b66c6000 30 bluestore.OnodeSpace(0xa0a4aec in 0x5eccbd0) lookup #4:d0000000::::head# miss    // not found in cache
2056118 2018-06-26 15:09:16.037045 b66c6000 20 bluestore(/var/lib/ceph/osd/ceph-41).collection(4.b_head 0xa0a4a00) get_onode oid #4:d0000000::::head# key 0x7f8000000000000004d000000021213dfffffffffffffffeffffffffffffffff'o'    // found in db
2056119 2018-06-26 15:09:16.038876 aa44c8e0 10 trim shard target 5734 k meta/data ratios 0.16875 + 0.05 (967 k + 286 k),  current 59662  (30990  + 28672 )
2056120 2018-06-26 15:09:16.038933 aa44c8e0 10 trim shard target 5734 k meta/data ratios 0.16875 + 0.05 (967 k + 286 k),  current 0  (0  + 0 )
2056121 2018-06-26 15:09:16.038948 aa44c8e0 10 trim shard target 5734 k meta/data ratios 0.16875 + 0.05 (967 k + 286 k),  current 0  (0  + 0 )
2056122 2018-06-26 15:09:16.038959 aa44c8e0 10 trim shard target 5734 k meta/data ratios 0.16875 + 0.05 (967 k + 286 k),  current 0  (0  + 0 )
2056123 2018-06-26 15:09:16.038969 aa44c8e0 10 trim shard target 5734 k meta/data ratios 0.16875 + 0.05 (967 k + 286 k),  current 0  (0  + 0 )
2056124 2018-06-26 15:09:16.046036 b66c6000 20 bluestore(/var/lib/ceph/osd/ceph-41).collection(4.b_head 0xa0a4a00)  r 0 v.len 29
2056125 2018-06-26 15:09:16.046095 b66c6000 30 bluestore.OnodeSpace(0xa0a4aec in 0x5eccbd0) add #4:d0000000::::head# 0x5eecf00
2056126 2018-06-26 15:09:16.046118 b66c6000 20 bluestore.onode(0x5eecf00).flush flush done    // flush into cache
2056127 2018-06-26 15:09:16.046176 b66c6000 10 bluestore(/var/lib/ceph/osd/ceph-41) omap_get_values 4.b_head oid #4:d0000000::::head# = 0
2056128 2018-06-26 15:09:16.046199 b66c6000 10 osd.41 3712 pgid 4.b coll 4.b_head
2056129 2018-06-26 15:09:16.046217 b66c6000 15 bluestore(/var/lib/ceph/osd/ceph-41) omap_get_values 4.b_head oid #4:d0000000::::head#
2056130 2018-06-26 15:09:16.046225 b66c6000 30 bluestore.OnodeSpace(0xa0a4aec in 0x5eccbd0) lookup
2056131 2018-06-26 15:09:16.046231 b66c6000 30 bluestore.OnodeSpace(0xa0a4aec in 0x5eccbd0) lookup #4:d0000000::::head# hit 0x5eecf00            // cache hit
2056132 2018-06-26 15:09:16.046238 b66c6000 20 bluestore.onode(0x5eecf00).flush flush done
2056133 2018-06-26 15:09:16.046629 b66c6000 30 bluestore(/var/lib/ceph/osd/ceph-41) omap_get_values  got 0x00000000000006ea'._epoch' -> _epoch    // Only got '_epoch', not '_infover', so the assertion triggered!
2056134 2018-06-26 15:09:16.046683 b66c6000 10 bluestore(/var/lib/ceph/osd/ceph-41) omap_get_values 4.b_head oid #4:d0000000::::head# = 0
2056135 2018-06-26 15:09:16.049543 b66c6000 -1 /home/ceph01/projects/master/ceph/src/osd/PG.cc: In function 'static int PG::peek_map_epoch(ObjectStore*, spg_t, epoch_t*, ceph::bufferlist*)' thread b66c6000 time 2018-06-26 15:09:16.046701
2056136 /home/ceph01/projects/master/ceph/src/osd/PG.cc: 3136: FAILED assert(values.size() == 2)


Source code (v12.2.4):

int PG::peek_map_epoch(ObjectStore *store,
                       spg_t pgid,
                       epoch_t *pepoch,
                       bufferlist *bl)
{
  …
  set<string> keys;
  keys.insert(infover_key);
  keys.insert(epoch_key);
  map<string,bufferlist> values;
  int r = store->omap_get_values(coll, pgmeta_oid, keys, &values);
  if (r == 0) {
    assert(values.size() == 2);   // requires both infover_key and epoch_key to be present
    …
  }
  …
}
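
For illustration only (my own standalone sketch, not Ceph code): omap_get_values() only returns the keys it actually finds, so when '_infover' is missing from the pgmeta object's omap the map comes back with a single entry and assert(values.size() == 2) aborts the OSD. The toy program below reproduces that behaviour with a plain std::map standing in for the omap result, and shows a per-key check that would at least name the missing key before the abort.

// Standalone illustration only -- plain std::map/std::string stand in for
// Ceph's omap values; infover_key/epoch_key mirror the "_infover"/"_epoch"
// names seen in the log above.
#include <cassert>
#include <iostream>
#include <map>
#include <string>

int main() {
  const std::string infover_key = "_infover";
  const std::string epoch_key   = "_epoch";

  // Simulated result of omap_get_values() for the broken PG: only the
  // "_epoch" key was found in RocksDB, "_infover" is gone.
  std::map<std::string, std::string> values;
  values[epoch_key] = "3712";

  // A per-key check that names the missing key instead of aborting silently:
  if (!values.count(infover_key))
    std::cerr << "pgmeta omap is missing " << infover_key << "\n";
  if (!values.count(epoch_key))
    std::cerr << "pgmeta omap is missing " << epoch_key << "\n";

  // This mirrors the check in PG::peek_map_epoch(): it requires both keys,
  // so a single-entry map trips the assertion and the process aborts.
  assert(values.size() == 2);
  return 0;
}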
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
