Hi,
Your cluster is in a backfilling state; it might be enough to just wait
for the backfill to finish. What is 'ceph -s' reporting? The PG could be
backfilling to a different OSD as well. You could query the PG to see
more details ('ceph pg 8.2a6 query').
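For example, to look at just the recovery/backfill details of that PG
(a minimal sketch; 'jq' is assumed to be available, otherwise read the
raw JSON output directly):

  ceph pg 8.2a6 query | jq '.state'            # current PG state string
  ceph pg 8.2a6 query | jq '.recovery_state'   # per-PG recovery/backfill details

'ceph -s' or 'ceph pg stat' will show the overall backfill progress.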
By the way, the PGs you show are huge (around 174 GB with more than
200k objects). Depending on the disks you use, splitting the PGs
(increasing pg_num on the pool) could help gain more performance, if
that is an issue for you.
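If you want to go that route, something along these lines would show the
current value and split the PGs (the pool name and the target value are
only placeholders here; pick a power of two that fits your cluster, and
be aware that a split triggers additional data movement):

  ceph osd pool get <poolname> pg_num
  ceph osd pool set <poolname> pg_num 512

Alternatively, the autoscaler can manage this for you:

  ceph osd pool set <poolname> pg_autoscale_mode on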
Regards,
Eugen
Quoting xadhoom76@xxxxxxxxx:
Hi to all,
Using Ceph 17.2.5, I have 3 PGs in a stuck state:
ceph pg map 8.2a6
osdmap e32862 pg 8.2a6 (8.2a6) -> up [88,100,59] acting [59,100]
Looking at OSDs 88, 100 and 59, I get this:
ceph pg ls-by-osd osd.100 | grep 8.2a6
8.2a6  211004  209089  0  0  174797925205  0  0  7075  active+undersized+degraded+remapped+backfilling  21m  32862'1540291  32862:3387785  [88,100,59]p88  [59,100]p59  2023-03-12T08:08:00.903727+0000  2023-03-12T08:08:00.903727+0000  6839  queued for deep scrub
ceph pg ls-by-osd osd.59 | grep 8.2a6
8.2a6  211005  209084  0  0  174798941087  0  0  7076  active+undersized+degraded+remapped+backfilling  22m  32862'1540292  32862:3387798  [88,100,59]p88  [59,100]p59  2023-03-12T08:08:00.903727+0000  2023-03-12T08:08:00.903727+0000  6839  queued for deep scrub
BUT
ceph pg ls-by-osd osd.88 | grep 8.2a6 ---> no output
The PG is missing from osd.88. How should I proceed?
Best regards
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx