On Tue, Aug 19, 2014 at 1:22 AM, Riederer, Michael <Michael.Riederer at br.de> wrote:
>
> root at ceph-admin-storage:~# ceph pg force_create_pg 2.587
> pg 2.587 now creating, ok
> root at ceph-admin-storage:~# ceph pg 2.587 query
> ...
>       "probing_osds": [
>             "5",
>             "8",
>             "10",
>             "13",
>             "20",
>             "35",
>             "46",
>             "56"],
> ...
>
> All of the OSDs listed in "probing_osds" are up and in, but the cluster
> cannot create the pg, and cannot scrub, deep-scrub or repair it either.
>

My experience is that as long as the pg query still lists OSDs under
down_osds_we_would_probe, ceph pg force_create_pg won't do anything, and
ceph osd lost didn't help either. The PGs would go into the creating
state, then revert to incomplete.

The only way I was able to get them to stay in the creating state was to
re-create all of the OSD IDs listed in down_osds_we_would_probe. Even then
it wasn't deterministic: I issued the ceph pg force_create_pg, and it only
took effect sometime in the middle of the night, after an unrelated OSD
went down and came back up. It was a very frustrating experience.

> Just to be sure that I did it the right way:
> # stop ceph-osd id=x
> # ceph osd out x
> # ceph osd crush remove osd.x
> # ceph auth del osd.x
> # ceph osd rm x
>

My procedure was the same as yours, with the addition of a ceph osd lost x
before ceph osd rm.
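
Written out in one place, this is roughly the sequence I ended up with, for
each osd id x that shows up in down_osds_we_would_probe. Untested as pasted;
x and pg 2.587 are just placeholders from this thread, and whatever follows
ceph osd create to re-provision the daemon depends on how you normally
deploy OSDs:

    # check whether the pg still lists OSDs it would probe but considers
    # down; in my experience force_create_pg does nothing while this list
    # is non-empty
    ceph pg 2.587 query | grep -A 5 down_osds_we_would_probe

    # remove the blocking osd id and mark it lost
    stop ceph-osd id=x
    ceph osd out x
    ceph osd crush remove osd.x
    ceph auth del osd.x
    ceph osd lost x --yes-i-really-mean-it
    ceph osd rm x

    # re-create the osd id so it no longer blocks peering
    # (ceph osd create should hand the just-removed id back out;
    #  re-provisioning the daemon and its data dir is up to your usual
    #  deployment tooling)
    ceph osd create

    # then re-issue the force_create_pg and watch whether the pg stays in
    # the creating state instead of falling back to incomplete
    ceph pg force_create_pg 2.587
    ceph pg 2.587 query | grep '"state"'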