How to force lost PGs

I created a pool with no replication and an RBD within that pool. I mapped the RBD to a machine, formatted it with a file system and dumped data on it.
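
For reference, the setup was roughly the following; the pool name, image name, size, and mapped device path are placeholders, not the exact ones I used:

  ceph osd pool create testpool 128
  ceph osd pool set testpool size 1
  rbd create testpool/testimage --size 10240
  rbd map testpool/testimage
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt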

Just to see what kind of trouble I can get into, I stopped the OSD the RBD was using, marked the OSD as out, and reformatted the OSD tree.
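
Roughly:

  service ceph stop osd.1
  ceph osd out 1

(osd.1 being the OSD in question.)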

When I brought the OSD back up, I was left with three stale PGs.
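
They can be listed with something like:

  ceph pg dump_stuck stale

and they also show up in "ceph health detail".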

Now I'm trying to clear the stale PGs. I've tried removing the OSD from the CRUSH map, the OSD list, etc., without any luck.
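
Specifically, variations along the lines of:

  ceph osd crush remove osd.1
  ceph auth del osd.1
  ceph osd rm 1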

Running
  ceph pg 3.1 query
  ceph pg 3.1 mark_unfound_lost revert
ceph replies that it doesn't have a PG 3.1.

Running
  ceph osd repair osd.1
hangs after pg 2.3e

Running
  ceph osd lost 1 --yes-i-really-mean-it
nukes the OSD. Rebuilding osd.1 goes fine, but I still have three stale PGs.
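
The rebuild was the usual manual OSD add, roughly the following (keyring path and host bucket name are placeholders):

  ceph osd create
  ceph-osd -i 1 --mkfs --mkkey
  ceph auth add osd.1 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-1/keyring
  ceph osd crush add osd.1 1.0 host=myhost
  service ceph start osd.1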

Any help clearing these stale PGs would be appreciated.

Thanks,
-Gaylord
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



