Re: activating+undersized+degraded+remapped

You may be suffering from the "crush gives up too soon" situation:

https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon

You have a 5+3 EC profile with only 8 hosts, so you may need to
increase the number of CRUSH tries. See the link above for how to fix it.
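
A rough sketch of that procedure from the linked doc (the value of 100
and the rule id are assumptions, adjust them for your cluster):

  ceph osd getcrushmap -o crush.map
  crushtool -d crush.map -o crush.txt
  # in crush.txt, add "step set_choose_tries 100" as the first step of the EC rule
  crushtool -c crush.txt -o crush.new
  # optional: check the new map produces no bad mappings for 8 shards
  crushtool -i crush.new --test --show-bad-mappings --rule <rule-id> --num-rep 8 --min-x 1 --max-x 1024
  ceph osd setcrushmap -i crush.new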

Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Sun, Mar 17, 2024 at 8:18 AM Joachim Kraftmayer - ceph ambassador <
joachim.kraftmayer@xxxxxxxxx> wrote:

> also helpful is the output of:
>
> ceph pg {poolnum}.{pg-id} query
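>
> e.g. for the first inactive PG reported below, that would be: ceph pg 4.3d query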
>
> ___________________________________
> ceph ambassador DACH
> ceph consultant since 2012
>
> Clyso GmbH - Premier Ceph Foundation Member
>
> https://www.clyso.com/
>
> Am 16.03.24 um 13:52 schrieb Eugen Block:
> > Yeah, the whole story would help to give better advice. With EC the
> > default min_size is k+1, so you could reduce min_size to 5
> > temporarily; this might bring the PGs back online. But the long-term
> > fix is to have all required OSDs up and enough OSDs to sustain an
> > outage.
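> >
> > For example, with a placeholder pool name (use the actual name of
> > your EC data pool):
> >
> >   ceph osd pool set <ec-pool-name> min_size 5
> >
> > and once the missing OSDs are back and the PGs are active+clean,
> > revert it to the default of k+1:
> >
> >   ceph osd pool set <ec-pool-name> min_size 6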
> >
> > Zitat von Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>:
> >
> >> Please share "ceph osd tree" and "ceph osd df tree". I suspect you
> >> don't have enough hosts to satisfy the EC rule.
> >>
> >> On Sat, Mar 16, 2024, 8:04 AM Deep Dish <deeepdish@xxxxxxxxx> wrote:
> >>
> >>> Hello
> >>>
> >>> I found myself in the following situation:
> >>>
> >>> [WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive
> >>>
> >>>     pg 4.3d is stuck inactive for 8d, current state
> >>> activating+undersized+degraded+remapped, last acting
> >>> [4,NONE,46,NONE,10,13,NONE,74]
> >>>
> >>>     pg 4.6e is stuck inactive for 9d, current state
> >>> activating+undersized+degraded+remapped, last acting
> >>> [NONE,27,77,79,55,48,50,NONE]
> >>>
> >>>     pg 4.cb is stuck inactive for 8d, current state
> >>> activating+undersized+degraded+remapped, last acting
> >>> [6,NONE,42,8,60,22,35,45]
> >>>
> >>>
> >>> I have one cephfs with two backing pools -- one for replicated data,
> >>> the
> >>> other for erasure data.  Each pool is mapped to REPLICATED/ vs.
> >>> ERASURE/
> >>> directories on the filesystem.
> >>>
> >>>
> >>> The above PGs are affecting the ERASURE pool (5+3) backing the
> >>> FS. How can I get Ceph to recover these three PGs?
> >>>
> >>>
> >>>
> >>> Thank you.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



