Can you attach the output of:

  ceph -s
  ceph pg dump
  ceph osd dump

and run

  ceph osd getmap -o /tmp/osdmap

and attach /tmp/osdmap?
-Sam

On Wed, Aug 7, 2013 at 1:58 AM, Howarth, Chris <chris.howarth@xxxxxxxx> wrote:
> Hi,
>
> One of our OSD disks failed on a cluster and I replaced it, but when it
> failed it did not completely recover and I have a number of pgs which are
> stuck unclean:
>
> # ceph health detail
> HEALTH_WARN 7 pgs stuck unclean
> pg 3.5a is stuck unclean for 335339.172516, current state active, last acting [5,4]
> pg 3.54 is stuck unclean for 335339.157608, current state active, last acting [15,7]
> pg 3.55 is stuck unclean for 335339.167154, current state active, last acting [16,9]
> pg 3.1c is stuck unclean for 335339.174150, current state active, last acting [8,16]
> pg 3.a is stuck unclean for 335339.177001, current state active, last acting [0,8]
> pg 3.4 is stuck unclean for 335339.165377, current state active, last acting [17,4]
> pg 3.5 is stuck unclean for 335339.149507, current state active, last acting [2,6]
>
> Does anyone know how to fix these? I tried the following, but this does not
> seem to work:
>
> # ceph pg 3.5 mark_unfound_lost revert
> pg has no unfound objects
>
> thanks
>
> Chris
>
> __________________________
> Chris Howarth
> OS Platforms Engineering
> Citi Architecture & Technology Engineering
> (e) chris.howarth@xxxxxxxx
> (t) +44 (0) 20 7508 3848
> (f) +44 (0) 20 7508 0964
> (mail-drop) CGC-06-3A
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
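
[Editor's note: the `ceph health detail` lines quoted above follow a regular format, `pg <pgid> is stuck unclean for <seconds>, current state <state>, last acting [<osds>]`. As a minimal illustration of that format (a hypothetical helper, not part of Ceph or of the original thread), the stuck PGs and their acting OSD sets can be pulled out like this:]

```python
import re

# Matches lines like:
# pg 3.5a is stuck unclean for 335339.172516, current state active, last acting [5,4]
STUCK_RE = re.compile(
    r"pg (?P<pgid>\S+) is stuck unclean for (?P<secs>[\d.]+), "
    r"current state (?P<state>\S+), last acting \[(?P<acting>[\d,]+)\]"
)

def parse_stuck_pgs(health_detail: str):
    """Return (pgid, state, acting-OSD list) for each stuck-unclean line."""
    results = []
    for line in health_detail.splitlines():
        m = STUCK_RE.search(line)
        if m:
            acting = [int(osd) for osd in m.group("acting").split(",")]
            results.append((m.group("pgid"), m.group("state"), acting))
    return results

# Sample taken verbatim from the quoted output above.
sample = """\
HEALTH_WARN 7 pgs stuck unclean
pg 3.5a is stuck unclean for 335339.172516, current state active, last acting [5,4]
pg 3.54 is stuck unclean for 335339.157608, current state active, last acting [15,7]
"""
print(parse_stuck_pgs(sample))
# → [('3.5a', 'active', [5, 4]), ('3.54', 'active', [15, 7])]
```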