> Due to a mistake on my part, I accidentally destroyed more OSDs than I
> needed to, and I ended up with 2 PGs in "incomplete" state.
>
> Doing "ceph pg query" on one of the incomplete PGs, I get the following
> (somewhere in the output):
>
>     "up": [
>         12,
>         6,
>         20
>     ],
>     "acting": [
>         12,
>         6,
>         20
>     ],
>     "avail_no_missing": [],
>     "object_location_counts": [],
>     "blocked_by": [
>         3,
>         4,
>         5
>     ],
>     "up_primary": 12,
>     "acting_primary": 12,
>     "purged_snaps": []
>
> I am assuming this means that OSDs 3, 4, 5 were the original ones (that
> are now destroyed), but I don't understand why the output shows 12, 6, 20
> as acting.

I can't help with the CephFS part since we don't use that, but I think the
above output means "since 3, 4 and 5 are gone, 12, 6 and 20 are now
designated as the replacement OSDs to hold the PG". However, since 3, 4 and
5 are gone, none of them can backfill into 12, 6 and 20, so those three are
waiting for this PG to appear "somewhere" so they can recover.

Perhaps you can force PG creation, so that 12, 6 and 20 get an empty PG to
start the pool again, and then hope that the next rsync will fill in any
missing slots (a rough sketch of the commands is at the end of this mail).
I am not so sure about this part, though, since I don't know what other
data apart from file contents may exist in a CephFS pool.

Is the worst case (dropping the pool, recreating it and running a full
rsync again) a possible way out? If so, you can perhaps test and see if you
can bridge the gap of the missing PGs, but if resyncing is out, then wait
for suggestions from someone more qualified at CephFS stuff than me. ;)

--
May the most significant bit of your life be positive.
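
PS: by "force PG creation" I mean something along the lines of the sketch
below. Treat it as a sketch, not a recipe: the PG ID is a placeholder (use
whatever "ceph pg ls incomplete" reports on your cluster), the exact flags
may vary by Ceph release, and recreating a PG as empty throws away any data
that only lived in that PG, so it only makes sense if the rsync can restore
the affected files afterwards.

    # list the PGs that are stuck in the "incomplete" state
    ceph pg ls incomplete

    # recreate one incomplete PG as an empty PG on its current acting set
    # (destructive for whatever objects that PG used to hold)
    ceph osd force-create-pg <pgid> --yes-i-really-mean-it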