Hi,

Can you share the output of:

# ceph pg dump pgs | grep ^3\.1d5b

Thanks

On Tue, Jul 13, 2021 at 12:41 PM Andres Rojas Guerrero <a.rojas@xxxxxxx> wrote:
> Hi, recently in a Nautilus cluster (version 14.2.6) I changed the CRUSH
> rule failure domain from osd to host. Everything seems OK, but now I get
> "PG not deep-scrubbed in time" although all PGs are in active+clean state:
>
> # ceph status
>   cluster:
>     id:     c74da5b8-3d1b-483e-8b3a-739134db6cf8
>     health: HEALTH_WARN
>             8192 pgs not deep-scrubbed in time
>             8192 pgs not scrubbed in time
>
>   services:
>     mon: 3 daemons, quorum ceph2mon01,ceph2mon02,ceph2mon03 (age 2w)
>     mgr: ceph2mon01(active, since 4w), standbys: ceph2mon02, ceph2mon03
>     mds: nxtclfs:1 {0=ceph2mon01=up:active} 2 up:standby
>     osd: 768 osds: 768 up (since 12d), 768 in (since 12d)
>
>   data:
>     pools:   2 pools, 16384 pgs
>     objects: 38.01M objects, 43 TiB
>     usage:   71 TiB used, 2.7 PiB / 2.7 PiB avail
>     pgs:     16384 active+clean
>
> If I try to deep-scrub one of the PGs that is not deep-scrubbed in time,
> I get the error "pg .... has no primary osd":
>
> # ceph pg deep-scrub 3.1d5b
> Error EAGAIN: pg 3.1d5b has no primary osd
>
> What could be the cause of the error?
>
> --
> *******************************************************
> Andrés Rojas Guerrero
> Unidad Sistemas Linux
> Area Arquitectura Tecnológica
> Secretaría General Adjunta de Informática
> Consejo Superior de Investigaciones Científicas (CSIC)
> Pinar 19
> 28006 - Madrid
> Tel: +34 915680059 -- Ext. 444059
> email: a.rojas@xxxxxxx
> ID comunicate.csic.es: @50852720l:matrix.csic.es
> *******************************************************

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
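
P.S. A minimal set of additional read-only diagnostics that might help narrow down why the PG reports no primary OSD after the CRUSH rule change; this is a sketch using standard Nautilus-era commands, and <poolname> below is a placeholder for the affected pool:

# ceph pg map 3.1d5b                         (current up/acting set and primary for this PG)
# ceph pg 3.1d5b query                       (detailed PG state, including last scrub stamps)
# ceph osd pool get <poolname> crush_rule    (which CRUSH rule the pool is using)
# ceph osd crush rule dump                   (verify the new host-level rule and its steps)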