yes, the pool on the test cluster contains a lot of objects

I created a new pool, put the object (this time only 100K, just to test it) and ran a deep-scrub -> error

# dd if=/dev/urandom of=test_obj bs=1K count=100
# rados -p nameplosion put c76c7ac2014adb9f0f0837ac1e85fd1e241af225908b6a0c3d3a44d6b866e732_00400000 test_obj
# ceph osd map nameplosion c76c7ac2014adb9f0f0837ac1e85fd1e241af225908b6a0c3d3a44d6b866e732_00400000
osdmap e2016317 pool 'nameplosion' (7) object 'c76c7ac2014adb9f0f0837ac1e85fd1e241af225908b6a0c3d3a44d6b866e732_00400000' -> pg 7.ffffffff (7.3ff) -> up ([123,87,85], p123) acting ([123,87,85], p123)
# ceph pg deep-scrub 7.3ff

and here is the ceph-osd.123.log snippet:

2022-02-10T14:12:13.287+0100 7f9f792ad700 -1 log_channel(cluster) log [ERR] : 7.3ff deep-scrub : stat mismatch, got 0/1 objects, 0/0 clones, 0/1 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 0/102400 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes.
2022-02-10T14:12:13.287+0100 7f9f792ad700 -1 log_channel(cluster) log [ERR] : 7.3ff deep-scrub 1 errors

Manuel

On Thu, 10 Feb 2022 15:39:49 +0300
Igor Fedotov <igor.fedotov@xxxxxxxx> wrote:

> Speaking of the test cluster - there are multiple objects in the test pool,
> right?
>
> If so, could you please create a new pool and put just a single object
> with the problematic name there. Then do the deep scrub. Is the issue
> reproducible this way?
>
>
> Thanks,
>
> Igor
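
PS: for reference, inspecting the scrub result further would look something
like the commands below. This is only a rough sketch: the PG id 7.3ff is
taken from the run above, and a pure stat mismatch may not list any
inconsistent objects at all.

# ceph health detail
# rados list-inconsistent-obj 7.3ff --format=json-pretty
# ceph pg 7.3ff query
# ceph pg repair 7.3ff

(health detail should show the PG as inconsistent after the failed
deep-scrub; list-inconsistent-obj prints per-object findings, if there are
any; pg query shows the scrub stamps and acting set; whether pg repair
actually clears this particular stat mismatch is not something I have
verified.)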