On Sun, 10 Nov 2019, ceph@xxxxxxxxxx wrote:
> IIRC there is a ~history_ignore option which could help in your test
> environment.

This option is dangerous and can lead to data loss if used incorrectly. I
suggest making backups of all PG instances with ceph-objectstore-tool
before using it.

More importantly, we haven't been able to pin down the root cause of this
issue. If anyone is able to reproduce this with logs, you will be my hero
forever!

sage

> Hth
> Mehmet
>
> On 14 October 2019 17:41:42 MESZ, Huseyin Cotuk <hcotuk@xxxxxxxxx> wrote:
> >Hi all,
> >
> >I also hit bug #24866 in my test environment. According to the logs,
> >the last_clean_epoch in the specified OSD/PG is 17703, but the
> >interval starts with 17895, so the OSD fails to start. There are some
> >other OSDs in the same state.
> >
> >2019-10-14 18:22:51.908 7f0a275f1700 -1 osd.21 pg_epoch: 18432
> >pg[18.51( v 18388'4 lc 18386'3 (0'0,18388'4] local-lis/les=18430/18431
> >n=1 ec=295/295 lis/c 18430/17702 les/c/f 18431/17703/0
> >18428/18430/18421) [11,21]/[11,21,20] r=1 lpr=18431 pi=[17895,18430)/3
> >crt=18388'4 lcod 0'0 unknown m=1 mbc={}] 18.51 past_intervals
> >[17895,18430) start interval does not contain the required bound
> >[17703,18430) start
> >
> >The cause is that pg 18.51 went clean in epoch 17703, but 17895 was
> >reported to the monitor.
> >
> >I am using the latest stable version of Mimic (13.2.6).
> >
> >Any idea how to fix it? Is there any way to bypass this check or fix
> >the reported epoch #?
> >
> >Thanks in advance.
> >
> >Best regards,
> >Huseyin Cotuk
> >hcotuk@xxxxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
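[Editor's note: the backup step recommended above can be sketched as a small shell script. This is an illustration, not a supported procedure: the OSD id, data path, backup directory, and the dry-run PG list are placeholder assumptions, and ceph-objectstore-tool must only be run against a stopped OSD.]

```shell
#!/bin/sh
# Sketch: export (back up) every PG held by a stopped OSD with
# ceph-objectstore-tool before trying any dangerous recovery option.
# OSD_ID, DATA_PATH, BACKUP_DIR and the sample PG ids are assumptions.
OSD_ID=21
DATA_PATH="/var/lib/ceph/osd/ceph-$OSD_ID"
BACKUP_DIR="./pg-backups"
DRY_RUN=1   # set to 0 on a real, stopped OSD to actually run the exports

if [ "$DRY_RUN" -eq 1 ]; then
    # Sample PG ids so this sketch runs without a cluster.
    pgs="18.51 18.2a"
else
    mkdir -p "$BACKUP_DIR"
    # Enumerate the PGs actually present on this OSD's object store.
    pgs=$(ceph-objectstore-tool --data-path "$DATA_PATH" --op list-pgs)
fi

for pg in $pgs; do
    cmd="ceph-objectstore-tool --data-path $DATA_PATH --pgid $pg --op export --file $BACKUP_DIR/osd$OSD_ID-$pg.export"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$cmd"   # show what would run
    else
        $cmd
    fi
done
```

A backup made this way can later be restored with `--op import` on the same (stopped) OSD if the recovery attempt goes wrong.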