On Mon, Apr 15, 2013 at 10:19 AM, Olivier Bonvalet <ceph.list@xxxxxxxxx> wrote:
> On Monday, April 15, 2013 at 10:16 -0700, Gregory Farnum wrote:
>> Are you saying you saw this problem more than once, and so you
>> completely wiped the OSD in question, then brought it back into the
>> cluster, and now it's seeing this error again?
>
> Yes, it's exactly that.
>
>> Are any other OSDs experiencing this issue?
>
> No, only this one has the problem.

Did you run scrubs while this node was out of the cluster? If you wiped the data and the error is recurring, then the problem apparently lies in the cluster state, not just this one node, and any other OSD that becomes primary for the broken PG(s) should crash as well. Can you verify that by taking this OSD down and then running a full scrub?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
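
[A minimal sketch of the verification Greg suggests: stop the suspect OSD so another replica becomes primary for its PGs, then scrub everything else and watch for crashes or inconsistencies. This assumes the suspect daemon is osd.N (substitute the real id) and sysvinit-style service management, which was typical for Ceph deployments in 2013; adjust for your setup.]

    # Keep CRUSH from rebalancing data while the OSD is down.
    ceph osd set noout

    # Stop the suspect OSD; its PGs fail over to another replica as primary.
    sudo service ceph stop osd.N

    # Ask every remaining OSD to deep-scrub all of its PGs.
    for id in $(ceph osd ls); do
        [ "$id" = "N" ] && continue   # skip the stopped OSD
        ceph osd deep-scrub "$id"
    done

    # Watch for inconsistent PGs or crashing primaries while scrubs run.
    ceph -w
    ceph health detail

    # When finished, bring the OSD back and clear the flag.
    sudo service ceph start osd.N
    ceph osd unset noout

[If the other primaries also crash or report inconsistencies on the same PGs, that points at corrupt cluster state rather than a bad disk on the one node, which is the distinction Greg is trying to draw.]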