Re: Right way to delete OSD from cluster?

Hi!

Maybe. But "ceph osd out" + "ceph osd purge" causes a double relocation, while "ceph osd crush reweight 0" + "ceph osd purge" causes only one.
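
For concreteness, a sketch of the two sequences (osd.5 is a hypothetical id;
the purge flag is from the Luminous-era CLI):

$ ceph osd out 5                              # first rebalance: data drains off osd.5
$ ceph osd purge 5 --yes-i-really-mean-it     # second rebalance: the host's crush weight shrinks

versus:

$ ceph osd crush reweight osd.5 0             # the only rebalance: host weight shrinks now
$ ceph osd purge 5 --yes-i-really-mean-it     # no further data movement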

----- Original Message -----
From: "Paul Emmerich" <paul.emmerich@xxxxxxxx>
To: "Fyodor Ustinov" <ufm@xxxxxx>
Cc: "David Turner" <drakonstein@xxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, 1 March, 2019 11:54:20
Subject: Re:  Right way to delete OSD from cluster?

"out" is internally implemented as "reweight 0"

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Mar 1, 2019 at 10:48 AM Fyodor Ustinov <ufm@xxxxxx> wrote:
>
> Hi!
>
> As far as I understand, reweighting to 0 also does not lead to "a period where one
> copy/shard is missing".
>
> ----- Original Message -----
> From: "Paul Emmerich" <paul.emmerich@xxxxxxxx>
> To: "Fyodor Ustinov" <ufm@xxxxxx>
> Cc: "David Turner" <drakonstein@xxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Friday, 1 March, 2019 11:32:54
> Subject: Re:  Right way to delete OSD from cluster?
>
> On Fri, Mar 1, 2019 at 8:55 AM Fyodor Ustinov <ufm@xxxxxx> wrote:
> >
> > Hi!
> >
> > Yes. But I am a little surprised by what is written in the documentation:
>
> The point of this is that you don't have a period where one copy/shard
> is missing if you wait for the drain to finish before taking it out.
> Yeah, there'll be a small unnecessary data movement afterwards, but
> you are never missing a copy.
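>
> A minimal sketch of what this looks like (osd.5 is a hypothetical id):
>
> $ ceph osd out 5   # osd.5 stays up, so its data remains readable while it drains
> $ ceph pg stat     # objects show up as misplaced, never as degraded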
>
>
> Paul
>
> > http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/
> >
> > ---
> > Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs.
> > ceph osd out {osd-num}
> > [...]
> > ---
> >
> > That is, it is argued that this is the most correct way (otherwise it would not have been written in the documentation).
> >
> >
> >
> > ----- Original Message -----
> > From: "David Turner" <drakonstein@xxxxxxxxx>
> > To: "Fyodor Ustinov" <ufm@xxxxxx>
> > Cc: "Scottix" <scottix@xxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> > Sent: Friday, 1 March, 2019 05:13:27
> > Subject: Re:  Right way to delete OSD from cluster?
> >
> > The reason is that an osd still contributes to the host weight in the crush
> > map even while it is marked out. When you out and then purge, the purge
> > operation removes the osd from the map and changes the weight of the host,
> > which changes the crush map, and data moves. By weighting the osd to 0.0,
> > the host's weight is already the same as it will be when you purge the osd.
> > Weighting to 0.0 is definitely the best option for removing storage if you
> > can trust the data on the osd being removed.
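> >
> > A quick way to see this (a sketch; osd.5 is a hypothetical id):
> >
> > $ ceph osd tree                             # a host's weight is the sum of its osds' crush weights
> > $ ceph osd crush reweight osd.5 0           # host weight drops here, triggering the one rebalance
> > $ ceph osd purge 5 --yes-i-really-mean-it   # host weight is already final, so no data moves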
> >
> > On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov <ufm@xxxxxx> wrote:
> >
> > > Hi!
> > >
> > > Thank you so much!
> > >
> > > I do not understand why, but your variant really does cause only one rebalance,
> > > compared to "osd out".
> > >
> > > ----- Original Message -----
> > > From: "Scottix" <scottix@xxxxxxxxx>
> > > To: "Fyodor Ustinov" <ufm@xxxxxx>
> > > Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> > > Sent: Wednesday, 30 January, 2019 20:31:32
> > > Subject: Re:  Right way to delete OSD from cluster?
> > >
> > > I generally have gone the crush reweight 0 route.
> > > This way the drive can participate in the rebalance, and the rebalance
> > > only happens once. Then you can take it out and purge.
> > >
> > > If I am not mistaken this is the safest.
> > >
> > > ceph osd crush reweight osd.<id> 0
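> > >
> > > and then, once the rebalance has finished (a sketch; <id> as above, with the
> > > Luminous-era purge flag):
> > >
> > > ceph osd out <id>
> > > ceph osd purge <id> --yes-i-really-mean-it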
> > >
> > > On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov <ufm@xxxxxx> wrote:
> > > >
> > > > Hi!
> > > >
> > > > But won't I get undersized objects after "ceph osd crush remove"? That is,
> > > isn't this the same thing as simply turning off the OSD
> > > and waiting for the cluster to recover?
> > > >
> > > > ----- Original Message -----
> > > > From: "Wido den Hollander" <wido@xxxxxxxx>
> > > > To: "Fyodor Ustinov" <ufm@xxxxxx>, "ceph-users" <
> > > ceph-users@xxxxxxxxxxxxxx>
> > > > Sent: Wednesday, 30 January, 2019 15:05:35
> > > > Subject: Re:  Right way to delete OSD from cluster?
> > > >
> > > > On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > > > > Hi!
> > > > >
> > > > > I thought I should first do "ceph osd out", wait for the relocation of the
> > > misplaced objects to finish, and after that do "ceph osd purge".
> > > > > But after "purge" the cluster starts relocating again.
> > > > >
> > > > > Maybe I'm doing something wrong? If so, what is the correct way to
> > > delete the OSD from the cluster?
> > > > >
> > > >
> > > > You are not doing anything wrong, this is the expected behavior. There
> > > > are two CRUSH changes:
> > > >
> > > > - Marking it out
> > > > - Purging it
> > > >
> > > > You could do:
> > > >
> > > > $ ceph osd crush remove osd.X
> > > >
> > > > Wait until everything is healthy again
> > > >
> > > > $ ceph osd purge X
> > > >
> > > > The last step should then not initiate any data movement.
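> > > >
> > > > You can verify that with the usual status command, e.g.:
> > > >
> > > > $ ceph -s   # should report no misplaced or degraded objects after the purge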
> > > >
> > > > Wido
> > > >
> > > > > WBR,
> > > > >     Fyodor.
> > >
> > >
> > >
> > > --
> > > T: @Thaumion
> > > IG: Thaumion
> > > Scottix@xxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



