One PG stuck in active+clean+remapped

Hi all,

I have one active+clean+remapped PG on a 152-OSD Octopus (15.2.15) cluster with evenly balanced OSDs (around 40% usage). The cluster keeps three replicas spread across three datacenters (A, B and C).

All PGs have a replica in each datacenter (as defined in the CRUSH map), except this one (in a pool containing 2048 PGs): its up set is OSD.34 and OSD.42, while its acting set is OSD.34, OSD.42 and OSD.38.

OSD.34 is located in datacenter A, OSD.42 in B, and OSD.38 in A again, although that third replica should be in C.
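
For reference, this is roughly how the placement looks when I query it (1.2f3 is only a placeholder, not the real PG id):

    # show the up set vs. the acting set for the PG (placeholder PG id)
    ceph pg map 1.2f3
    # detailed peering and recovery state for the PG
    ceph pg 1.2f3 query
    # confirm which datacenter each involved OSD belongs to
    ceph osd tree | grep -E 'datacenter|osd\.(34|38|42)'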

I restarted all OSDs, monitors, managers and servers. I also marked the OSDs the PG is acting on out and brought them back in a minute later; in every case the PG ends up in the same state after backfilling, except that one of the A replicas moves to another OSD in datacenter A. I turned the balancer off and on as well. Nothing recovers the PG to plain active+clean.
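
In case the exact steps matter, the out/in and balancer toggling were essentially the following (OSD 38 shown as an example; 34 and 42 were cycled the same way):

    # mark one acting OSD out, wait for backfill to finish, then bring it back in
    ceph osd out 38
    ceph osd in 38
    # disable and re-enable the balancer module
    ceph balancer off
    ceph balancer on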

Any suggestions?

Regards,
Erwin