Re: OSD upgrades

"reweight 0" and "out" are the exact same thing


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Tue, Jun 2, 2020 at 9:30 AM Wido den Hollander <wido@xxxxxxxx> wrote:

>
>
> On 6/2/20 5:44 AM, Brent Kennedy wrote:
> > We are rebuilding servers, and before Luminous our process was (rough
> > equivalent commands are sketched after the list):
> >
> > 1. Reweight the OSD to 0
> > 2. Wait for the rebalance to complete
> > 3. Out the OSD
> > 4. crush remove osd
> > 5. auth del osd
> > 6. ceph osd rm #
> >
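> > (Concretely, a sketch only, with osd.12 as a placeholder id. Step 1 is
> > ambiguous between two different weights: "ceph osd reweight 12 0" sets
> > the override weight, which is the same as marking the OSD out, while
> > "ceph osd crush reweight osd.12 0" sets the CRUSH weight; only the
> > latter avoids a second rebalance at step 4.)
> >
> >   ceph osd crush reweight osd.12 0   # step 1 (CRUSH-weight variant)
> >   ceph -s                            # step 2: wait for active+clean
> >   ceph osd out 12                    # step 3
> >   ceph osd crush remove osd.12       # step 4
> >   ceph auth del osd.12               # step 5
> >   ceph osd rm 12                     # step 6
> >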
> > It seems the Luminous documentation says you should (again, a command
> > sketch follows the list):
> >
> > 1. Out the OSD
> > 2. Wait for the cluster rebalance to finish
> > 3. Stop the OSD
> > 4. osd purge #
> >
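> > (As a sketch with osd.12 as a placeholder, on a systemd host:)
> >
> >   ceph osd out 12
> >   ceph -s                                    # wait for the rebalance
> >   systemctl stop ceph-osd@12                 # run on the OSD's host
> >   ceph osd purge 12 --yes-i-really-mean-it
> >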
> > Is reweighting to 0 no longer suggested?
> >
> >
> >
> > Side note: I tried our existing process, and even after the reweight
> > the entire cluster started rebalancing again after step 4 (crush
> > remove osd) of the old process. I should also note that after
> > reweighting to 0, when I tried to run "ceph osd out #", it said the
> > OSD was already marked out.
> >
> > I assume the docs are correct, but I just want to make sure, since
> > reweighting to 0 had previously been recommended.
>
> The new commands just make it simpler. There are many ways to
> accomplish the same goal, but what the docs describe should work in
> most scenarios.
>
> Wido
>
> >
> >
> >
> > Regards,
> >
> > -Brent
> >
> >
> >
> > Existing clusters:
> >
> > Test: Nautilus 14.2.2 with 3 OSD servers, 1 mon/mgr, 1 gateway,
> > 2 iSCSI gateways (all virtual on NVMe)
> >
> > US Production (HDD): Nautilus 14.2.2 with 11 OSD servers, 3 mons,
> > 4 gateways, 2 iSCSI gateways
> >
> > UK Production (HDD): Nautilus 14.2.2 with 12 OSD servers, 3 mons,
> > 4 gateways
> >
> > US Production (SSD): Nautilus 14.2.2 with 6 OSD servers, 3 mons,
> > 3 gateways, 2 iSCSI gateways
> >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



