Problem with Stale+Peering PGs

Hi,

we currently have the problem that our Ceph cluster has 5 PGs that are stale+peering.
The cluster has 16 OSDs across 4 hosts.

How can I tell Ceph that these PGs are no longer there?


ceph pg 1.20a mark_unfound_lost delete
Error ENOENT: i don't have pgid 1.20a


ceph pg 1.20a mark_unfound_lost revert
Error ENOENT: i don't have pgid 1.20a


ceph pg dump_stuck stale
ok
PG_STAT STATE         UP   UP_PRIMARY ACTING ACTING_PRIMARY
1.327   stale+peering [12]         12   [12]             12
1.3a8   stale+peering [12]         12   [12]             12
1.38f   stale+peering [12]         12   [12]             12
1.20a   stale+peering [12]         12   [12]             12
1.288   stale+peering [12]         12   [12]             12
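
In case it helps, this is how I pulled the pgids out of the stuck listing above (just a quick shell sketch over the plain-text output; matching on the STATE column is my own assumption about the layout):

# print only the pgid column for the stale+peering entries
ceph pg dump_stuck stale 2>/dev/null | awk '$2 == "stale+peering" {print $1}'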


OSD.12 was removed and is back in the cluster with a new drive.
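
What I was considering next (only a sketch; my assumption is that force-create-pg recreates a PG as empty, accepting that its data is gone, and that older releases spell it "ceph pg force_create_pg" instead):

# recreate the five affected PGs as empty, since no copy of their data survived
for pg in 1.327 1.3a8 1.38f 1.20a 1.288; do
    ceph osd force-create-pg "$pg"
done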

kind regards


