Re: Scrub shuts down the OSD process


 



On Monday, April 15, 2013 at 10:57 -0700, Gregory Farnum wrote:
> On Mon, Apr 15, 2013 at 10:19 AM, Olivier Bonvalet <ceph.list@xxxxxxxxx> wrote:
> > On Monday, April 15, 2013 at 10:16 -0700, Gregory Farnum wrote:
> >> Are you saying you saw this problem more than once, and so you
> >> completely wiped the OSD in question, then brought it back into the
> >> cluster, and now it's seeing this error again?
> >
> > Yes, it's exactly that.
> >
> >
> >> Are any other OSDs experiencing this issue?
> >
> > No, only this one has the problem.
> 
> Did you run scrubs while this node was out of the cluster? If you
> wiped the data and this is recurring then this is apparently an issue
> with the cluster state, not just one node, and any other primary for
> the broken PG(s) should crash as well. Can you verify by taking this
> one down and then doing a full scrub?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
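Just to make sure I understand the test, I would do something like this (only a sketch; "osd.12" stands in for the real ID of the affected OSD, and the init-script call depends on the distribution):

    # Stop the suspect daemon and mark it out, so another OSD becomes
    # primary for its PGs.
    service ceph stop osd.12
    ceph osd out 12

    # Ask every remaining OSD to scrub, then watch whether another
    # primary crashes or a PG goes inconsistent.
    for id in $(ceph osd ls); do
        [ "$id" = "12" ] && continue
        ceph osd scrub "$id"
    done
    ceph -w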

Also note that no PG is marked "corrupted": I only have PGs in "active+remapped" or "active+degraded".
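If it helps, this is how I list them (standard ceph CLI, nothing specific to my setup):

    ceph health detail
    ceph pg dump_stuck unclean
    ceph pg dump | egrep 'remapped|degraded'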

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




