Hello, last week I upgraded both of my clusters from 0.72.2 to the latest stable Firefly (0.80), following the suggested procedure (upgrade monitors, then OSDs, then MDSs, then clients). Everything looks fine and both clusters report HEALTH_OK; the only weird thing is that a few PGs remain in active+clean+scrubbing. I have tried querying the PGs and restarting the involved OSD daemons and hosts, but the issue persists and the set of PGs stuck in +scrubbing keeps changing. I also tried setting noscrub on the OSDs with "ceph osd set noscrub", but nothing changed. What can I do? Below are the cluster statuses and their OSD trees (the commands I ran are sketched at the end of this message).

FIRST CLUSTER:

     health HEALTH_OK
     mdsmap e510: 1/1/1 up {0=ceph-mds1=up:active}, 1 up:standby
     osdmap e4604: 5 osds: 5 up, 5 in
      pgmap v138288: 1332 pgs, 4 pools, 117 GB data, 30178 objects
            353 GB used, 371 GB / 724 GB avail
                1331 active+clean
                   1 active+clean+scrubbing

# id    weight  type name                  up/down reweight
-1      0.84    root default
-7      0.28        rack rack1
-2      0.14            host cephosd1-dev
0       0.14                osd.0          up      1
-3      0.14            host cephosd2-dev
1       0.14                osd.1          up      1
-8      0.28        rack rack2
-4      0.14            host cephosd3-dev
2       0.14                osd.2          up      1
-5      0.14            host cephosd4-dev
3       0.14                osd.3          up      1
-9      0.28        rack rack3
-6      0.28            host cephosd5-dev
4       0.28                osd.4          up      1

SECOND CLUSTER:

     health HEALTH_OK
     osdmap e158: 10 osds: 10 up, 10 in
      pgmap v9724: 2001 pgs, 6 pools, 395 MB data, 139 objects
            1192 MB used, 18569 GB / 18571 GB avail
                1998 active+clean
                   3 active+clean+scrubbing

# id    weight  type name                  up/down reweight
-1      18.1    root default
-2      9.05        host wn-recas-uniba-30
0       1.81            osd.0              up      1
1       1.81            osd.1              up      1
2       1.81            osd.2              up      1
3       1.81            osd.3              up      1
4       1.81            osd.4              up      1
-3      9.05        host wn-recas-uniba-32
5       1.81            osd.5              up      1
6       1.81            osd.6              up      1
7       1.81            osd.7              up      1
8       1.81            osd.8              up      1
9       1.81            osd.9              up      1
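For completeness, this is roughly what I ran to inspect the stuck PGs and to try to stop the scrubs. The PG ID 0.1a and osd.0 are only example placeholders, not my actual PGs/OSDs, and the restart line assumes the sysvinit-style "service ceph" script:

    # list the PGs currently reported as scrubbing
    ceph pg dump | grep scrubbing

    # query one of the affected PGs (0.1a is just an example ID)
    ceph pg 0.1a query

    # restart the primary OSD of that PG on its host (osd.0 as an example)
    service ceph restart osd.0

    # disable scheduled scrubbing cluster-wide (and re-enable it later)
    ceph osd set noscrub
    ceph osd unset noscrub

Setting noscrub made no visible difference; the affected PGs stayed in active+clean+scrubbing.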