Hi Craig,

many thanks for your help. I decided to reinstall ceph.

Regards,
Mike

________________________________
From: Craig Lewis [clewis at centraldesktop.com]
Sent: Tuesday, 19 August 2014 22:24
To: Riederer, Michael
Cc: ceph-users at lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean

On Tue, Aug 19, 2014 at 1:22 AM, Riederer, Michael <Michael.Riederer at br.de<mailto:Michael.Riederer at br.de>> wrote:

> root at ceph-admin-storage:~# ceph pg force_create_pg 2.587
> pg 2.587 now creating, ok
> root at ceph-admin-storage:~# ceph pg 2.587 query
> ...
>   "probing_osds": [ "5", "8", "10", "13", "20", "35", "46", "56"],
> ...
>
> All of the OSDs listed under "probing_osds" are up and in, but the cluster cannot create the PG, nor scrub, deep-scrub, or repair it.

My experience is that as long as you have down_osds_we_would_probe in the pg query, ceph pg force_create_pg won't do anything. ceph osd lost didn't help. The PGs would go into the creating state, then revert to incomplete. The only way I was able to get them to stay in the creating state was to re-create all of the OSD IDs listed in down_osds_we_would_probe.

Even then, it wasn't deterministic. I issued the ceph pg force_create_pg, and it actually took effect sometime in the middle of the night, after an unrelated OSD went down and came back up. It was a very frustrating experience.

> Just to be sure that I did it the right way:
>
> # stop ceph-osd id=x
> # ceph osd out x
> # ceph osd crush remove osd.x
> # ceph auth del osd.x
> # ceph osd rm x

My procedure was the same as yours, with the addition of a ceph osd lost x before ceph osd rm.
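
Put together, a rough sketch of that full sequence, assuming a single unrecoverable OSD with id x (pg 2.587 is just the example from this thread, ceph osd lost wants --yes-i-really-mean-it, and the exact stop command depends on your init system):

    # check whether the pg still lists OSDs it would probe but considers down
    ceph pg 2.587 query | grep -A 10 down_osds_we_would_probe

    # remove the dead OSD so it no longer blocks force_create_pg
    stop ceph-osd id=x                        # or: service ceph stop osd.x
    ceph osd out x
    ceph osd crush remove osd.x
    ceph auth del osd.x
    ceph osd lost x --yes-i-really-mean-it
    ceph osd rm x

    # then retry creating the pg
    ceph pg force_create_pg 2.587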