Re: Orphan PG

Marek Dohojda wrote:

One of the stuck inactive PGs is 0.21; here is the output of ceph pg map:

#ceph pg map 0.21
osdmap e579 pg 0.21 (0.21) -> up [] acting []

#ceph pg dump_stuck stale
ok
pg_stat state   up      up_primary      acting  acting_primary
0.22    stale+active+clean      [5,1,6] 5       [5,1,6] 5
0.1f    stale+active+clean      [2,0,4] 2       [2,0,4] 2
<redacted for ease of reading>

# ceph osd stat
     osdmap e579: 14 osds: 14 up, 14 in

If I run

#ceph pg 0.21 query

the command hangs and never returns any output.

I suspect the problem is that these PGs were created, but the OSDs they were originally created on have since disappeared. So I believe I should just remove these PGs, but honestly I don't see how.

Does anybody have any ideas as to what to do next?

ceph pg force_create_pg 0.21
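
If several PGs are in this state, the same command can be run in a loop over the stale list. This is only a sketch based on the dump_stuck output format shown above; review the list before running it, since force_create_pg recreates a PG as empty and any data still belonging to it is gone:

# ceph pg dump_stuck stale 2>/dev/null | awk '$2 ~ /stale/ {print $1}' | \
    while read -r pg; do ceph pg force_create_pg "$pg"; done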

We were experimenting with this same scenario last week: we deliberately stopped the 3 OSDs holding the replicas of one PG to see how it would affect the cluster, and we ended up with a stale PG and 400 requests blocked for a long time. After trying several commands to get the cluster back, the one that made the difference was force_create_pg, followed by moving the OSD with the blocked requests out of the cluster.
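
For that last step, marking an OSD out looks like the following (the id 5 is only an example; substitute the id of the OSD that is reporting blocked requests):

# ceph osd out 5

Once the OSD is out, CRUSH remaps its PGs to other OSDs and the blocked requests should clear as recovery proceeds.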

Hope that helps,
Alex
