Hello ceph-users,

Due to a mistake on my part, I accidentally destroyed more OSDs than I needed to, and I ended up with 2 pgs in “incomplete” state. Needless to say, since the OSDs are destroyed, there is no way of getting them back.

Both pgs are on a CephFS data pool, and I have backups of all the data (actually, ceph *is* the backup). I would be happy removing the affected files and letting the next “rsync” copy them over. In the current state of the filesystem, I get (soft) hangs on some file accesses, and since I am doing rsyncs, it would be great if those hangs didn’t happen.

Doing “ceph pg query” on one of the incomplete pgs, I get the following (somewhere in the output):

    "up": [
        12,
        6,
        20
    ],
    "acting": [
        12,
        6,
        20
    ],
    "avail_no_missing": [],
    "object_location_counts": [],
    "blocked_by": [
        3,
        4,
        5
    ],
    "up_primary": 12,
    "acting_primary": 12,
    "purged_snaps": []

I am assuming this means that OSDs 3, 4 and 5 were the original ones (that are now destroyed), but I don’t understand why the output shows 12, 6, 20 as up/acting.

How can I map the objects in the incomplete pgs to files, and how can I then remove them?

Thank you!
George
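PS: In case it helps to clarify what I am after, this is roughly the approach I was imagining (just a sketch, not something I have verified on my cluster; the pool name "cephfs_data", the pg id "2.1a", the mount point "/mnt/cephfs" and the example inode are placeholders for my actual values):

    # List the objects stored in one of the incomplete pgs
    # (assumes a rados version whose "ls" accepts --pgid).
    rados ls --pgid 2.1a

    # Alternatively, confirm which pg a given object maps to:
    ceph osd map cephfs_data 10000000123.00000000

    # CephFS data objects are named <inode-in-hex>.<chunk-in-hex>,
    # so convert the hex inode to decimal and look it up on a mounted fs:
    ino_hex=10000000123
    find /mnt/cephfs -inum $((16#$ino_hex))

    # Once the path is known, remove the file through the filesystem
    # and let the next rsync restore it:
    rm /mnt/cephfs/path/to/affected/file

Is something along these lines sensible, or is there a better way?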