Re: One pg stuck in active+undersized+degraded after OSD down


Thanks.

# ceph version
ceph version 15.2.7 (88e41c6c49beb18add4fdb6b4326ca466d931db8) octopus (stable)



On Thu, Nov 18, 2021 at 3:28 PM Stefan Kooman <stefan@xxxxxx> wrote:

> On 11/18/21 13:20, David Tinker wrote:
> > I just grepped all the OSD pod logs for error and warn and nothing comes
> up:
> >
> > # k logs -n rook-ceph rook-ceph-osd-10-659549cd48-nfqgk  | grep -i warn
> > etc
> >
> > I am assuming that would bring back something if any of them were
> unhappy.
>
> Your issue looks similar to another thread last week (thread pg
> inactive+remapped).
>
> What Ceph version are you running?
>
> I don't know if enabling debugging on osd.7 would reveal something.
>
> Maybe recovery can be triggered by moving the primary to another OSD with
> pg upmap. Check your failure domain to see which OSD would be suitable.
>
> Gr. Stefan
>
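For anyone finding this thread later, Stefan's two suggestions (turning up debug logging on osd.7, and moving the PG with pg-upmap) might be sketched roughly as below. This is only a sketch: the PG id 2.1f and target osd.3 are hypothetical placeholders, so substitute values from your own `ceph pg dump` / `ceph osd tree` output, and pick a target OSD in an appropriate failure domain.

```shell
# Raise the log level on the suspect OSD via the centralized config
# store (available on Octopus); remember to revert it afterwards.
ceph config set osd.7 debug_osd 10
# ...inspect logs, then:
ceph config rm osd.7 debug_osd

# pg-upmap requires luminous-or-later clients:
ceph osd set-require-min-compat-client luminous

# Check the stuck PG's up/acting sets (2.1f is a placeholder PG id):
ceph pg map 2.1f

# Remap the PG away from osd.7 onto another suitable OSD
# (osd.3 here is hypothetical):
ceph osd pg-upmap-items 2.1f 7 3

# Remove the explicit mapping once the PG is healthy again:
ceph osd rm-pg-upmap-items 2.1f
```

Note that `pg-upmap-items` changes the PG's placement rather than directly selecting a primary; as far as I know there is no primary-only upmap command in Octopus, so moving the PG off the suspect OSD entirely is the workaround.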
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


