So - here is the feedback. After a long night... The plain copying did not help; it then complains about the snapshots of another VM (also with old snapshots).

I remembered a thread I had read saying the problem could be solved by converting back to filestore, because then you have access to the data in the filesystem. So I did that for the three affected OSDs. After that, of course (aaaargh), the PG got relocated to other OSDs - but at least one copy was still on a filestore-converted OSD. So I first set the primary affinity so that the PG was primary on the filestore OSD. Then I quickly turned off all three OSDs; the PG became stale (all replicas were down). I flushed the journals to be on the safe side.

Then I took a detailed look at the filesystem (with find) and found the object rbd_data.2313975238e1f29.000XXX, which had size 0, so no data in it. I then ran

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-X rbd_data.2313975238e1f29.000XXX remove

on all three OSDs and fired them up again. Then, after waiting for the cluster to rebalance (the PG was still reported as inconsistent), I started a repair on the PG (primary still on the filestore OSD). -> Fixed. :-) HEALTHY

Tonight I will set the OSDs up as BlueStore again. Hopefully it will not happen again. I also found a tip in a bug report to set "bluefs_allocator = stupid" in ceph.conf, so I did that as well and restarted all OSDs afterwards. Maybe this prevents the problem from happening again.

Best
Karsten

On 20.02.2018 16:03, Eugen Block wrote:
> Alright, good luck!
> The results would be interesting. :-)


Ecologic Institut gemeinnuetzige GmbH
Pfalzburger Str. 43/44, D-10717 Berlin
Geschaeftsfuehrerin / Director: Dr. Camilla Bausch
Sitz der Gesellschaft / Registered Office: Berlin (Germany)
Registergericht / Court of Registration: Amtsgericht Berlin (Charlottenburg), HRB 57947

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
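
[Editorial note: for anyone hitting the same issue, a rough sketch of the command sequence described in the mail above. The OSD IDs (X/Y/Z), the PG ID and the 000XXX object suffix are placeholders, not the actual values from this cluster, and the primary-affinity step assumes the non-filestore replicas are demoted so the filestore OSD becomes primary.]

  # make the filestore-converted OSD the primary for the PG
  # (lower the affinity of the other replicas; placeholder IDs)
  ceph osd primary-affinity osd.Y 0
  ceph osd primary-affinity osd.Z 0

  # stop the three OSDs holding the PG and flush their journals
  systemctl stop ceph-osd@X ceph-osd@Y ceph-osd@Z
  ceph-osd -i X --flush-journal        # repeat for Y and Z

  # locate the zero-length object on each filestore OSD
  find /var/lib/ceph/osd/ceph-X/current -name '*2313975238e1f29*' -size 0

  # remove it on all three OSDs, then start them again
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-X \
      rbd_data.2313975238e1f29.000XXX remove
  systemctl start ceph-osd@X ceph-osd@Y ceph-osd@Z

  # once recovery has settled, repair the PG
  ceph pg repair <pgid>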