Re: ceph pg stuck - missing on 1 osd how to proceed

Hi,

You can use the script at
https://github.com/TheJJ/ceph-balancer/blob/master/placementoptimizer.py
to check backfill progress and PG state, and also to cancel backfilling
via upmap. To view the movement status of all PGs currently in the
backfilling state, run "placementoptimizer.py showremapped".
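As a side note, you can also spot the backfill target directly in the 'ceph pg map' output: any OSD that appears in the "up" set but not yet in the "acting" set is the one data is being backfilled to. A small shell sketch (the parsing here is my own, not part of Ceph or the balancer script), using the map line quoted in this thread:

```shell
# Sample 'ceph pg map' output, copied from this thread:
line='osdmap e32862 pg 8.2a6 (8.2a6) -> up [88,100,59] acting [59,100]'

# Extract the up and acting OSD sets from the line.
up=$(echo "$line" | sed 's/.*up \[\([0-9,]*\)\].*/\1/')
acting=$(echo "$line" | sed 's/.*acting \[\([0-9,]*\)\].*/\1/')

# Any OSD that is "up" but not yet "acting" is a backfill target.
targets=""
for osd in $(echo "$up" | tr ',' ' '); do
  case ",$acting," in
    *",$osd,"*) ;;                      # already in the acting set
    *) targets="$targets osd.$osd" ;;   # still being backfilled
  esac
done
echo "backfill target(s) for pg 8.2a6:$targets"
```

For the line above this prints osd.88, which matches what the original poster sees: the PG is not yet listed on osd.88 because the backfill to it has not completed.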

On Fri, Apr 14, 2023 at 11:20 AM Eugen Block <eblock@xxxxxx> wrote:

> Hi,
> your cluster is in a backfilling state, so maybe just wait for the
> backfill to finish? What does 'ceph -s' report? The PG could be
> backfilling to a different OSD as well. You can query the PG for more
> details ('ceph pg 8.2a6 query').
> By the way, the PGs you show are huge (around 174 GB with more than
> 200k objects); depending on the disks you use, splitting the PGs could
> help improve performance (if that is an issue for you).
>
> Regards,
> Eugen
>
> Zitat von xadhoom76@xxxxxxxxx:
>
> > Hi to all,
> > Using Ceph 17.2.5 I have 3 PGs in a stuck state.
> >
> > ceph pg map 8.2a6
> > osdmap e32862 pg 8.2a6 (8.2a6) -> up [88,100,59] acting [59,100]
> > Looking at OSDs 88, 100 and 59, I got the following:
> >
> >
> > ceph pg ls-by-osd osd.100 | grep 8.2a6
> > 8.2a6  211004  209089  0  0  174797925205  0  0  7075  active+undersized+degraded+remapped+backfilling  21m  32862'1540291  32862:3387785  [88,100,59]p88  [59,100]p59  2023-03-12T08:08:00.903727+0000  2023-03-12T08:08:00.903727+0000  6839  queued for deep scrub
> >
> > ceph pg ls-by-osd osd.59 | grep 8.2a6
> > 8.2a6  211005  209084  0  0  174798941087  0  0  7076  active+undersized+degraded+remapped+backfilling  22m  32862'1540292  32862:3387798  [88,100,59]p88  [59,100]p59  2023-03-12T08:08:00.903727+0000  2023-03-12T08:08:00.903727+0000  6839  queued for deep scrub
> >
> > BUT:
> > ceph pg ls-by-osd osd.88 | grep 8.2a6 ---> returns nothing
> >
> > The PG is missing on osd.88. How should I proceed?
> > Best regards
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
>



