Re: Ceph PGs stuck inactive after rebuild node

Hi Eugen,

Can you please elaborate on what you mean by "restarting the primary PG"?
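Do you mean restarting the OSD that acts as primary for each stuck PG?
Something along these lines, perhaps (assuming systemd-managed OSDs;
<pgid> and <osd-id> below are placeholders):

# list the PGs stuck in an inactive state
ceph pg dump_stuck inactive

# the first OSD in the acting set is the PG's primary
ceph pg map <pgid>

# then, on the host that owns that OSD:
systemctl restart ceph-osd@<osd-id>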

Best regards,
Zakhar
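
P.S. For anyone hitting this later: a common way to see what an
inactive PG is waiting on is something like

ceph health detail    # lists the affected PGs
ceph pg <pgid> query  # the "recovery_state" section shows what blocks it

where <pgid> is one of the inactive PGs.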

On Wed, Apr 6, 2022 at 5:15 PM Eugen Block <eblock@xxxxxx> wrote:

> Update: Restarting the primary PG helped to bring the PGs back to
> active state. Consider this thread closed.
>
>
> Zitat von Eugen Block <eblock@xxxxxx>:
>
> > Hi all,
> >
> > I have a strange situation here: a Nautilus cluster with two DCs,
> > whose main pool is an EC pool with k=7, m=11, min_size = 8 (failure
> > domain host). We have confirmed failure resiliency for this cluster
> > multiple times, but today we rebuilt one node, which currently
> > leaves 34 PGs inactive, and I'm wondering why. It's quite urgent
> > and I'd like to get the PGs active again. We didn't drain the node
> > before rebuilding it, but this procedure has worked multiple times
> > in the past.
> > I haven't done much damage yet, apart from trying to force the
> > backfill of one PG (ceph pg force-backfill <PG>), to no avail so
> > far. Any pointers are highly appreciated!
> >
> > Regards,
> > Eugen
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


