On Mon, Oct 14, 2019 at 1:27 PM Florian Haas <florian@xxxxxxxxxxxxxx> wrote:
>
> On 14/10/2019 13:20, Dan van der Ster wrote:
> > Hey Florian,
> >
> > What does the ceph.log ERR or ceph-osd log show for this inconsistency?
> >
> > -- Dan
>
> Hi Dan,
>
> what's in the log is (as far as I can see) consistent with the pg query
> output:
>
> 2019-10-14 08:33:57.345 7f1808fb3700  0 log_channel(cluster) log [DBG] :
> 10.10d scrub starts
> 2019-10-14 08:33:57.345 7f1808fb3700 -1 log_channel(cluster) log [ERR] :
> 10.10d scrub : stat mismatch, got 0/1 objects, 0/0 clones, 0/1 dirty,
> 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 0/11 bytes,
> 0/0 manifest objects, 0/0 hit_set_archive bytes.
> 2019-10-14 08:33:57.345 7f1808fb3700 -1 log_channel(cluster) log [ERR] :
> 10.10d scrub 1 errors
>
> Have you seen this before?

Yes, occasionally we see stat mismatches -- repair has always fixed
them for good, though.

Are you using PG autoscaling? There's a known issue there which
generates stat mismatches.

-- dan

> Cheers,
> Florian
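
For reference, the repair mentioned above is the standard
inconsistent-PG workflow; a minimal sketch with the stock Ceph CLI
(10.10d is the PG id from the log above, and for a pure stat mismatch
list-inconsistent-obj will often report no objects, since the damage
is in the PG's aggregate stats rather than in any single object):

    # confirm which PG(s) the cluster currently flags as inconsistent
    ceph health detail

    # inspect what the scrub recorded; often empty for stat mismatches
    rados list-inconsistent-obj 10.10d --format=json-pretty

    # ask the primary OSD to repair the PG, then re-verify with a deep scrub
    ceph pg repair 10.10d
    ceph pg deep-scrub 10.10d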