Re: 6 PGs stuck not-active, remapped

As an example, here are the up and acting sets of one of the PGs:

    up:     [113, 138, 30, 132, 105, 57, 106, 140, 161]
    acting: [72, 150, 2147483647, 2147483647, 24, 48, 32, 157, 103]
So obviously there's a lot of backfilling there... but it seems it's not
making any progress.
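
In case it helps, that output came from something like the command below (the
PG id 7.1c2 is just a placeholder, not one of the actual stuck PGs, and jq is
assumed to be installed):

    # Pull the state plus the up/acting sets for a single PG
    ceph pg 7.1c2 query | jq '{state, up, acting}'

The 2147483647 entries in the acting set are CRUSH's "none" value, i.e. those
PG shards currently have no OSD assigned at all.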
Mac Wynkoop, Senior Datacenter Engineer
NetDepot.com: Cloud Servers; Delivered
Houston | Atlanta | NYC | Colorado Springs
1-844-25-CLOUD Ext 806

On Wed, Oct 21, 2020 at 2:41 PM Mac Wynkoop <mwynkoop@xxxxxxxxxxxx> wrote:

> We recently did some work on the Ceph cluster, and a few disks ended up
> offline at the same time. There are now 6 PGs that are stuck in a
> "remapped" state, and this is all of their recovery states:
>
>     recovery_state:
>       0:
>         name: Started/Primary/WaitActingChange
>         enter_time: 2020-10-21 18:48:02.034430
>         comment: waiting for pg acting set to change
>       1:
>         name: Started
>         enter_time: 2020-10-21 18:48:01.752957
>
> Any ideas?
>
> Mac Wynkoop
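
For reference, here's roughly how I'm watching these for movement (again, the
PG id is a placeholder for one of the stuck PGs):

    # List PGs that have been stuck unclean (includes remapped PGs)
    ceph pg dump_stuck unclean

    # Re-check the recovery_state of one PG as the acting set changes
    ceph pg 7.1c2 query | jq '.recovery_state'
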
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


