Need a procedure for corrupted pg_log repair using ceph-kvstore-tool

Can someone provide information about what to look for, and how to modify the related leveldb keys with ceph-kvstore-tool, when an error like the following leads to an OSD crash? (A rough sketch of what I have so far follows the log excerpt below.)

    -5> 2018-09-10 14:46:30.896130 7efff657dd00 20 read_log_and_missing 712021'566147 (656569'562061) delete   28:b2d84df6:::rbd_data.423c863d6f7d13.000000000000071c:head by client.442854982.0:26349 2018-08-22 12:45:48.366430 0
    -4> 2018-09-10 14:46:30.896135 7efff657dd00 20 read_log_and_missing 712021'566148 (396232'430937) modify   28:b2a8dfc4:::rbd_data.1279a2016dd7ff07.0000000000001715:head by client.375380018.0:66926373 2018-08-22 13:53:42.891543 0
    -3> 2018-09-10 14:46:30.896140 7efff657dd00 20 read_log_and_missing 712021'566149 (455388'436624) modify   28:b2e5c03b:::rbd_data.c3b0cd3fe98040.0000000000000dd1:head by client.357924238.0:32177266 2018-08-22 12:40:20.290431 0
    -2> 2018-09-10 14:46:30.896145 7efff657dd00 20 read_log_and_missing 712021'566150 (455452'436627) modify   28:b2be4e96:::rbd_data.c3b0cd3fe98040.0000000000000e8e:head by client.357924238.0:32178303 2018-08-22 13:51:03.149459 0
    -1> 2018-09-10 14:46:30.896153 7efff657dd00 20 read_log_and_missing 714416'1 (0'0) error    28:b2b68805:::rbd_data.516e3914fdc210.0000000000001993:head by client.441544789.0:109624 0.000000 -2
     0> 2018-09-10 14:46:30.897918 7efff657dd00 -1 /build/ceph-12.2.7/src/osd/PGLog.h: In function 'static void PGLog::read_log_and_missing(ObjectStore*, coll_t, coll_t, ghobject_t, const pg_info_t&, PGLog::IndexedLog&, missing_type&, bool, std::ostringstream&, bool, bool*, const DoutPrefixProvider*, std::set<std::basic_string<char> >*, bool) [with missing_type = pg_missing_set<true>; std::ostringstream = std::basic_ostringstream<char>]' thread 7efff657dd00 time 2018-09-10 14:46:30.896158
    /build/ceph-12.2.7/src/osd/PGLog.h: 1354: FAILED assert(last_e.version.version < e.version.version)
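
As far as I can tell, the assert at PGLog.h:1354 fires because the last entry, 714416'1, carries a lower version counter (1) than its predecessor 712021'566150, so the log is out of order. So far I can dump the pg log and take a full export with ceph-objectstore-tool to confirm this before touching anything. A minimal sketch, assuming a FileStore OSD; <id> and 28.xxx are placeholders for the OSD id and the affected pgid:

    # Stop the OSD, then dump the pg log of the affected PG as JSON
    # (28.xxx must be derived from the PG shown in the crash log above)
    systemctl stop ceph-osd@<id>
    ceph-objectstore-tool \
        --data-path /var/lib/ceph/osd/ceph-<id> \
        --journal-path /var/lib/ceph/osd/ceph-<id>/journal \
        --pgid 28.xxx --op log > /root/pg_28.xxx_log.json

    # Take a full export of the PG before any modification
    ceph-objectstore-tool \
        --data-path /var/lib/ceph/osd/ceph-<id> \
        --journal-path /var/lib/ceph/osd/ceph-<id>/journal \
        --pgid 28.xxx --op export --file /root/pg_28.xxx.export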


The Ceph version is 12.2.7. The current problem is a consequence of earlier crashes of numerous OSDs caused by a different Ceph error.
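
For reference, the direction I was planning to take with ceph-kvstore-tool, in case someone can confirm or correct it. This is a sketch only, assuming FileStore, so the omap leveldb lives under current/omap; <prefix> and <key> are placeholders I still need to map to the pg_log entries of this PG:

    # Work from a backup of the omap leveldb, never only on the live store
    cp -a /var/lib/ceph/osd/ceph-<id>/current/omap /root/omap.backup

    # List all keys; the pg log entries should appear under a per-PG prefix
    ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-<id>/current/omap \
        list > /root/omap_keys.txt

    # Dump a single suspect key to a file for inspection
    ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-<id>/current/omap \
        get <prefix> <key> out /root/key.bin

    # Remove the offending entry (destructive; only with a verified backup)
    ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-<id>/current/omap \
        rm <prefix> <key>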