If you watch `ceph -w` while stopping the OSD, do you see
2014-12-02 11:45:17.715629 mon.0 [INF] osd.X marked itself down
?
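A minimal way to catch that message while stopping the OSD, assuming the default cluster log location on a monitor host:

    ceph -w | grep 'marked itself down'
    # or, after the fact, on a monitor host (default cluster log path):
    grep 'marked itself down' /var/log/ceph/ceph.log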
On Tue, Dec 2, 2014 at 11:06 AM, Christoph Adomeit <Christoph.Adomeit@xxxxxxxxxxx> wrote:
Thanks Craig,
but this is what I am doing.
After setting "ceph osd set noout" I do a "service ceph stop osd.51"
and as soon as I do this I get a growing number of slow requests (around 200),
even though there is not much load on my cluster.
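Concretely, the sequence is (with a check at the end to see when the monitors have marked the OSD down; the service command assumes the SysV init scripts in use here):

    ceph osd set noout
    service ceph stop osd.51
    # wait until osd.51 is reported down before doing anything else
    ceph osd tree | grep 'osd.51'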
Christoph
On Tue, Dec 02, 2014 at 10:40:13AM -0800, Craig Lewis wrote:
> I've found that it helps to shut down the OSDs before shutting down the
> host, especially if the node is also a monitor. It seems that some OSD
> shutdown messages get lost while the monitors are holding elections.
>
> On Tue, Dec 2, 2014 at 10:10 AM, Christoph Adomeit <
> Christoph.Adomeit@xxxxxxxxxxx> wrote:
>
> > Hi there,
> >
> > I have a Giant cluster with 60 OSDs on 6 OSD hosts.
> >
> > Now I want to do maintenance on one of the OSD Hosts.
> >
> > The documented procedure is to "ceph osd set noout" and then shut down
> > the OSD node for maintenance.
> >
> > However, as soon as I shut down even one OSD, I get around 200 slow requests,
> > and the number of slow requests keeps growing for minutes.
> >
> > The test was done at night with low IOPS and I was expecting the cluster
> > to handle this condition much better.
> >
> > Is there a way to shut down OSDs more gracefully so that I can prevent
> > those slow requests? I suppose it takes some time until the monitor gets
> > notified that an OSD was shut down.
> >
> > Thanks
> > Christoph
> >
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
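Putting the advice in this thread together, a minimal sketch of the whole maintenance flow; the stop-all-OSDs step and the down check are only suggestions, and the service commands assume the SysV init scripts used in this thread:

    ceph osd set noout
    # on the host going into maintenance, stop its OSDs first
    service ceph stop osd
    # confirm those OSDs are reported down before powering the host off
    ceph osd tree
    # ... do the maintenance, boot the host, and start the OSDs again ...
    service ceph start osd
    ceph osd unset noout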