On Sat, 28 Dec 2013, Corin Langosch wrote:
> Hi guys,
>
> I got an inconsistent pg and found it was due to a broken hdd. I marked
> this osd out and the cluster rebalanced without any problems. But the pg
> is still reported as inconsistent.
>
> Before marking osd 2 out:
>
> HEALTH_ERR 1 pgs inconsistent; 1 scrub errors; noout flag(s) set
> pg 6.29f is active+clean+inconsistent, acting [8,2]
> 1 scrub errors
> noout flag(s) set
>
> After marking osd 2 out:
>
> HEALTH_ERR 1 pgs inconsistent; 1 scrub errors; noout flag(s) set
> pg 6.29f is active+clean+inconsistent, acting [8,10]
> 1 scrub errors
> noout flag(s) set
>
> I tried to run "ceph osd repair 6.29f", but the command seems to expect
> an osd-id rather than a pg-id when using emperor. So I ran "ceph osd
> repair 8" and got a couple of
>
> 2013-12-28 14:50:05.301411 osd.8 [INF] XXX repair ok, 0 fixed
>
> messages, but the pg "6.29f" was not listed and is still marked as
> inconsistent.
>
> How can I get the cluster stable again? :)

ceph pg scrub 6.29f

...and see if it comes back with errors or not. If it doesn't, you can

ceph pg repair 6.29f

to clear the inconsistent flag.

sage
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
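The sequence Sage suggests can be sketched as one small script. This is a hedged sketch, not an official procedure: the pg id 6.29f comes from the original report (substitute your own from "ceph health detail"), and the script is guarded so it only talks to a cluster when a ceph CLI is actually present.

```shell
#!/bin/sh
# Sketch of the per-pg scrub/repair sequence from this thread.
# PG is the placement group id from the health report; 6.29f is the
# example from the original post, not a value you should reuse blindly.
PG=6.29f

if command -v ceph >/dev/null 2>&1; then
  ceph pg scrub "$PG"      # re-scrub the pg; watch 'ceph -w' for scrub results
  ceph pg repair "$PG"     # if the scrub still reports errors, attempt repair
  ceph health detail       # confirm the inconsistent flag has cleared
else
  echo "ceph CLI not found; commands shown for reference only"
fi
```

Note that "ceph pg scrub/repair" operate on a pg id, unlike "ceph osd repair", which (as the original poster found on emperor) takes an osd id and repairs every pg on that osd.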