Re: 1 bogus remapped PG (stuck pg_temp) -- how to cleanup?

Hi,

On 2/2/22 14:39, Konstantin Shalygin wrote:
Hi,

The cluster is Nautilus 14.2.22

For a long time we have had 1 bogus remapped PG reported, without any actual 'remapped' PGs:

# ceph pg dump pgs_brief | awk '{print $2}' | grep active | sort | uniq -c
dumped pgs_brief
   15402 active+clean
       6 active+clean+scrubbing


# ceph osd dump | grep pg_temp
pg_temp 4.9f1 [358,331,374]    <---------------------


# ceph pg 4.9f1 query
Error ENOENT: i don't have pgid 4.9f1

I have tried restarting the mons, mgrs, and OSDs for this pool.
Does anyone know a recipe for clearing this pg_temp entry?
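Not a confirmed fix, but a sketch of how one might at least enumerate the stale entries (untested against a live cluster; assumes a stale pg_temp points at a PG that no longer answers a query, as 4.9f1 does above):

```shell
# Sketch (untested): list pg_temp entries whose PG no longer answers
# 'ceph pg query', i.e. stale mappings like 4.9f1 above.
ceph osd dump | awk '/^pg_temp/ {print $2}' | while read -r pgid; do
    ceph pg "$pgid" query >/dev/null 2>&1 || echo "stale pg_temp: $pgid"
done
```

Nautilus also has `ceph osd pg-temp <pgid> [<osd>...]` for forcing a pg_temp mapping; whether overwriting the entry that way prompts the monitors to drop it afterwards is something I have not verified.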


We have the same problem:

  services:
    osd: 454 osds: 453 up (since 19m), 453 in (since 2d); 1 remapped pgs

  data:
    pgs:     7372 active+clean
             90   active+clean+scrubbing
             43   active+clean+scrubbing+deep

# ceph osd dump | grep pg_temp
pg_temp 96.7d [26,125,40]


Pool 96 currently has 32 PGs, far fewer than the 128 implied by the pg_temp entry (0x7d is 125 in decimal, so the PG cannot exist with pg_num=32). The autoscaler is active on this cluster, but I'm not sure whether this pool was shrunk in the past.
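As a quick sanity check of that arithmetic (the PG id suffix is hexadecimal; 32 and 7d are taken from the output above):

```shell
# 96.7d means PG number 0x7d of pool 96; convert to decimal and
# compare against the pool's current pg_num.
pgid_hex=7d
pg_no=$((16#$pgid_hex))
echo "$pg_no"            # 125
pg_num=32
if [ "$pg_no" -ge "$pg_num" ]; then
    echo "PG 96.${pgid_hex} cannot exist with pg_num=${pg_num}"
fi
```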


Regards,

Burkhard


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



