I am in the process of doing exactly what you are
-- this worked for me:
1. mount the first partition of the bluestore drive that
holds the missing PGs (if it's not already mounted)
> mkdir /mnt/tmp
> mount /dev/sdb1 /mnt/tmp
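(Quick sanity check, assuming a ceph-disk style bluestore layout -- the small first partition should contain the OSD metadata, including a 'block' symlink pointing at the data device:
> ls -l /mnt/tmp
If you don't see files like block, fsid, type and whoami there, you've probably mounted the wrong partition.)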
2. export the pg to a suitable temporary storage location:
> ceph-objectstore-tool --data-path /mnt/tmp --pgid 1.24
--op export --file /mnt/sdd1/recover.1.24
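If you're not sure which PGs actually live on that drive, you can list them first to confirm 1.24 is really there:
> ceph-objectstore-tool --data-path /mnt/tmp --op list-pgs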
3. find the acting osd
> ceph health detail | grep incomplete
PG_DEGRADED Degraded data redundancy: 23 pgs unclean, 23
pgs incomplete
pg 1.24 is incomplete, acting [18,13]
pg 4.1f is incomplete, acting [11]
...
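If you want more detail on why a particular PG is incomplete (peering state, which OSDs it has heard from), you can also query it:
> ceph pg 1.24 query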
4. set noout
> ceph osd set noout
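You can confirm the flag is set with:
> ceph osd dump | grep flags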
5. Find the OSD and log into it -- I used 18 here.
> ceph osd find 18
{
    "osd": 18,
    "crush_location": {
        "building": "building-dc",
        "chassis": "chassis-dc400f5-10",
        "city": "city",
        "floor": "floor-dc4",
        "host": "stor-vm4",
        "rack": "rack-dc400f5",
        "region": "cfl",
        "room": "room-dc400",
        "root": "default",
        "row": "row-dc400f"
    }
}
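The crush_location tells you which host the OSD lives on (stor-vm4 in this example), so the remaining steps are run there:
> ssh stor-vm4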
6. copy the file to somewhere accessible by the new (acting) OSD, e.g. with scp as shown below
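Something like this should do it -- the hostname and paths here are just the ones from the example above, adjust for your setup:
> scp /mnt/sdd1/recover.1.24 stor-vm4:/tmp/recover.1.24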
7. stop the osd
> service ceph-osd@18 stop
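It's worth confirming the daemon is really down before the import, since ceph-objectstore-tool needs exclusive access to the OSD's store:
> systemctl status ceph-osd@18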
8. import the file using ceph-objectstore-tool
> ceph-objectstore-tool --data-path
/var/lib/ceph/osd/ceph-18 --op import --file /tmp/recover.1.24
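To confirm the import took, you can list the PGs on that OSD the same way as in step 2:
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-18 --op list-pgs | grep 1.24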
9. start the osd
> service ceph-osd@18 start
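Once the OSD is back up and the PG has recovered, don't forget to clear the flag set in step 4 and check cluster health:
> ceph osd unset noout
> ceph -s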
This worked for me -- I'm not sure it's the best way, or whether some of the steps were unnecessary, and I have yet to validate that the recovered data is good.