Re: ceph pg stuck - missing on 1 osd how to proceed

We know very little about the whole cluster, can you add the usual information like 'ceph -s' and 'ceph osd df tree'? Scrubbing has nothing to do with the undersized PGs. Is the balancer and/or autoscaler on? Please also add 'ceph balancer status' and 'ceph osd pool autoscale-status'.
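For convenience, the four commands requested above can be collected into a single file to attach to the reply. This is a minimal sketch; the output file name and the loop are my own choices, and it assumes it is run on a node with an admin keyring:

```shell
# Collect the requested cluster state into one report file.
# The command list is exactly the one asked for above.
out=ceph-report.txt
: > "$out"
for cmd in "ceph -s" "ceph osd df tree" "ceph balancer status" "ceph osd pool autoscale-status"; do
    echo "### $cmd" >> "$out"
    # Capture output and errors; note a failure instead of aborting.
    $cmd >> "$out" 2>&1 || echo "(command failed)" >> "$out"
done
```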
Thanks,
Eugen

Quoting xadhoom76@xxxxxxxxx:

Hi, the system is still backfilling and the same PG is still degraded. The percentage of degraded objects is not moving;
it has not dropped below 0.010% for days.
Is the backfilling connected to the degraded state?
Does the system have to finish backfilling before the degraded PG can recover?

[WRN] PG_DEGRADED: Degraded data redundancy: 84469/826401567 objects degraded (0.010%), 1 pg degraded, 1 pg undersized
    pg 8.283 is stuck undersized for 92m, current state active+undersized+degraded+remapped+backfilling, last acting [17,59]

And stopping the scrub led to inconsistent PGs.

Thanks for any help.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




