Re: ceph status reporting non-existing osd

On Fri, 13 Jul 2012, Gregory Farnum wrote:
> On Fri, Jul 13, 2012 at 1:17 AM, Andrey Korolyov <andrey@xxxxxxx> wrote:
> > Hi,
> >
> > Recently I've reduced my test setup from 6 to 4 osds (at ~60% usage)
> > on a six-node cluster, and I removed a bunch of rbd objects during
> > recovery to avoid overfilling. Right now I'm constantly receiving a
> > warning about a near-full state on a non-existent osd:
> >
> >    health HEALTH_WARN 1 near full osd(s)
> >    monmap e3: 3 mons at {0=192.168.10.129:6789/0,1=192.168.10.128:6789/0,2=192.168.10.127:6789/0}, election epoch 240, quorum 0,1,2 0,1,2
> >    osdmap e2098: 4 osds: 4 up, 4 in
> >    pgmap v518696: 464 pgs: 464 active+clean; 61070 MB data, 181 GB used, 143 GB / 324 GB avail
> >    mdsmap e181: 1/1/1 up {0=a=up:active}
> >
> > HEALTH_WARN 1 near full osd(s)
> > osd.4 is near full at 89%
> >
> > Needless to say, osd.4 now exists only in ceph.conf; it is no longer
> > in the crushmap. The reduction was done online, i.e. without
> > restarting the entire cluster.
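
For anyone hitting this thread in the archives, the usual online
removal sequence looks roughly like the following (osd.4 used as the
example id; the exact service command depends on your init scripts):

    ceph osd out 4                # mark it out so data migrates off it
    service ceph stop osd.4       # stop the daemon on its host
    ceph osd crush remove osd.4   # drop it from the crushmap
    ceph auth del osd.4           # remove its authentication key
    ceph osd rm 4                 # delete it from the osdmap
    # finally, remove the [osd.4] section from ceph.conf by hand
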
> 
> Whoops! It looks like Sage has written some patches to fix this, but
> for now you should be good if you just update your ratios to a larger
> number, and then bring them back down again. :)
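
Greg's workaround means temporarily raising the monitors' nearfull
threshold. A minimal sketch, assuming a 0.48-era CLI and the stock
ratios of 0.85 (nearfull) / 0.95 (full):

    # raise the nearfull threshold above the reported 89% so the
    # stale warning clears
    ceph pg set_nearfull_ratio 0.95
    # once HEALTH_WARN goes away, restore the default
    ceph pg set_nearfull_ratio 0.85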

Restarting ceph-mon should also do the trick.
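
On a sysvinit-managed cluster (typical for this era) that would be,
on each of the three monitor hosts, something along the lines of:

    # restart the local monitor daemon; repeat for mon.1 and mon.2
    service ceph restart mon.0    # or: /etc/init.d/ceph restart mon.0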

Thanks for the bug report!
sage