If you can follow the documentation here:

http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/
http://ceph.com/docs/master/rados/troubleshooting/

and provide some additional information, we may be better able to help you. For example, the output of "ceph osd tree" would help us understand the status of your cluster a bit better.

On Thu, May 16, 2013 at 10:32 PM, Olivier Bonvalet <ceph.list@xxxxxxxxx> wrote:
> On Wednesday, May 15, 2013 at 00:15 +0200, Olivier Bonvalet wrote:
>> Hi,
>>
>> I have some PGs in a down and/or incomplete state on my cluster,
>> because I lost 2 OSDs and a pool had only 2 replicas. So of course
>> that data is lost.
>>
>> My problem now is that I can't retrieve a "HEALTH_OK" status: if I
>> try to remove, read, or overwrite the corresponding RBD images,
>> nearly all OSDs hang (well... they don't do anything, and requests
>> stay in a growing queue until production grinds to a halt).
>>
>> So, what can I do to remove those corrupt images?
>>
>
> Bump. Can nobody help me with this problem?
>
> Thanks,
>
> Olivier
>

--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
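[Editor's note: the diagnostic steps referenced in the thread above can be sketched as a short session. This is a minimal sketch assuming a reachable Ceph cluster with admin credentials; the specific PG id (2.5) used below is hypothetical, and these commands require a live cluster, so outputs will vary.]

```shell
# Overall cluster health, with per-PG detail for anything not HEALTH_OK.
ceph health detail

# OSD hierarchy and up/down status -- the output John asked for.
ceph osd tree

# List PGs stuck in problematic states (down/incomplete show up here).
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean

# Inspect one stuck PG in depth (2.5 is a hypothetical PG id --
# substitute an id reported by the commands above).
ceph pg 2.5 query
```

Note that if the OSDs holding a PG's only replicas are truly gone, the Ceph troubleshooting docs describe declaring them lost (`ceph osd lost <id>`) so the cluster can stop waiting for them, but that permanently discards the missing data and should only be done once you are certain the OSDs cannot be recovered.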