Yeah, the whole story would help to give better advice. With EC the
default min_size is k+1; you could temporarily reduce min_size to 5,
which might bring the PGs back online. The long-term fix, though, is
to have all required OSDs up and enough OSDs to sustain an outage.
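Something along these lines (only a sketch; the pool name below is a
placeholder, substitute your actual EC data pool):

  # check the current value first
  ceph osd pool get <ec-data-pool> min_size
  # temporarily allow the PGs to go active with only k shards (k=5 for a 5+3 profile)
  ceph osd pool set <ec-data-pool> min_size 5
  # once the PGs have recovered and backfilled, revert to the default k+1
  ceph osd pool set <ec-data-pool> min_size 6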
Quoting Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>:
Please share "ceph osd tree" and "ceph osd df tree". I suspect you do
not have enough hosts to satisfy the EC rule.
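Roughly what to look at (pool and profile names below are placeholders):

  # how many hosts/OSDs are up, and how full they are
  ceph osd tree
  ceph osd df tree
  # which EC profile and crush rule the pool uses (k/m, failure domain)
  ceph osd pool get <ec-data-pool> erasure_code_profile
  ceph osd pool get <ec-data-pool> crush_rule
  ceph osd erasure-code-profile get <profile-name>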
On Sat, Mar 16, 2024, 8:04 AM Deep Dish <deeepdish@xxxxxxxxx> wrote:
Hello
I found myself in the following situation:
[WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive
pg 4.3d is stuck inactive for 8d, current state
activating+undersized+degraded+remapped, last acting
[4,NONE,46,NONE,10,13,NONE,74]
pg 4.6e is stuck inactive for 9d, current state
activating+undersized+degraded+remapped, last acting
[NONE,27,77,79,55,48,50,NONE]
pg 4.cb is stuck inactive for 8d, current state
activating+undersized+degraded+remapped, last acting
[6,NONE,42,8,60,22,35,45]
I have one cephfs with two backing pools -- one for replicated data, the
other for erasure data. Each pool is mapped to the REPLICATED/ and
ERASURE/ directories on the filesystem, respectively.
The above PGs belong to the ERASURE pool (5+3) backing the FS. How
can I get Ceph to recover these three PGs?
Thank you.
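More detail on why these PGs are stuck could be gathered with something
like the following (PG IDs taken from the health output above):

  ceph health detail
  ceph pg dump_stuck inactive
  # per-PG detail: compare "up" vs "acting" and check the peering/recovery state
  ceph pg 4.3d query
  ceph pg 4.6e query
  ceph pg 4.cb query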
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx