Re: Luminous: osd_crush_location_hook renamed to crush_location_hook

> Op 19 oktober 2017 om 15:34 schreef Sage Weil <sage@xxxxxxxxxxxx>:
> 
> 
> On Thu, 19 Oct 2017, Dan van der Ster wrote:
> > Hi Wido,
> > 
> > Unexpected crush location changes can indeed be quite nasty.
> > 
> > With this in mind, I wonder if a crush lock would be useful.
> > 
> >     ceph osd set nocrushchange
> > 
> > With that flag set, osds could still go in and out, but crush
> > move/add/remove/etc., as well as tunables changes, would be blocked.
> 
> The problem I see with this is that it would prevent new OSD additions or
> other changes... and if you went to unset the flag in order to allow a new
> node to be adjusted or brought online, you might get an avalanche of
> blocked changes.
> 
> I think what we actually want is a more targeted variation of
> osd_crush_update_on_start that only updates the location if it has never
> been set (i.e., it is a new osd).  Like, osd_crush_update_on_create.  Then
> it's left to the admin to move OSDs?
> 
> > On Mon, Oct 16, 2017 at 2:02 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> > > Hi,
> > >
> > > I completely overlooked this, but I just found out that osd_crush_location_hook was renamed to crush_location_hook in the new config style.
> > >
> > > When upgrading from Jewel to Luminous without touching your configuration, OSDs will move back to the default CRUSH location because the hook is no longer executed.
> > >
> > > Was this an oversight with Luminous or was it intentional?
> 
> The implications are an oversight... I didn't think about customized hooks
> whose placements would get reverted if the config option wasn't updated.  Otherwise
> the item in the release notes would have read more like a warning:
> 
> * The `osd crush location` config option is no longer supported.  Please
>   update your ceph.conf to use the `crush location` option instead.
> 

Yes, we probably need a big warning for this in the ReleaseNotes.
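For anyone who hits this on upgrade, the fix is roughly the following ceph.conf change (the hook path here is just an example, not a real default):

```ini
[osd]
# Jewel and older: this option pointed at the custom location hook
#osd crush location hook = /usr/local/bin/my-crush-hook

# Luminous: the option was renamed; without this line the hook is
# silently ignored and OSDs revert to the default CRUSH location
crush location hook = /usr/local/bin/my-crush-hook
```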

Are you going to make sure it goes in?

Wido

> Sorry about that!
> 
> sage


