Re: activating+undersized+degraded+remapped

Please share "ceph osd tree" and "ceph osd df tree" output. I suspect you do
not have enough hosts to satisfy the EC rule.
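As a first check, you can compare the pool's EC profile (k+m and failure
domain) against the number of hosts in the CRUSH map. A rough sketch; the
pool name "cephfs_ec" and the profile name are placeholders, substitute your
own:

# Show which erasure-code profile the pool uses, then its k, m and
# crush-failure-domain settings
ceph osd pool get cephfs_ec erasure_code_profile
ceph osd erasure-code-profile get <profile-name>

# Rough count of host buckets in the CRUSH map; a 5+3 profile with
# crush-failure-domain=host needs at least 8 hosts to place every shard
ceph osd tree | grep -c host

If the host count is below k+m (8 in your case), the NONE entries in the
acting sets would be expected, since CRUSH cannot find a distinct host for
every shard.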

On Sat, Mar 16, 2024, 8:04 AM Deep Dish <deeepdish@xxxxxxxxx> wrote:

> Hello
>
> I found myself in the following situation:
>
> [WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive
>
>     pg 4.3d is stuck inactive for 8d, current state
> activating+undersized+degraded+remapped, last acting
> [4,NONE,46,NONE,10,13,NONE,74]
>
>     pg 4.6e is stuck inactive for 9d, current state
> activating+undersized+degraded+remapped, last acting
> [NONE,27,77,79,55,48,50,NONE]
>
>     pg 4.cb is stuck inactive for 8d, current state
> activating+undersized+degraded+remapped, last acting
> [6,NONE,42,8,60,22,35,45]
>
>
> I have one cephfs with two backing pools -- one for replicated data, the
> other for erasure data.  Each pool is mapped to REPLICATED/ vs. ERASURE/
> directories on the filesystem.
>
>
> The above PGs affect the ERASURE pool (5+3) backing the FS.  How can I
> get Ceph to recover these three PGs?
>
>
>
> Thank you.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



