Re: ceph-disk is now deprecated

On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
> On Tue, Nov 28, 2017 at 9:22 AM, David Turner <drakonstein@xxxxxxxxx> wrote:
> > Isn't marking something as deprecated meaning that there is a better option
> > that we want you to use and you should switch to it sooner than later? I
> > don't understand how this is ready to be marked as such if ceph-volume can't
> > be switched to for all supported use cases. If ZFS, encryption, FreeBSD, etc
> > are all going to be supported under ceph-volume, then how can ceph-disk be
> > deprecated before ceph-volume can support them? I can imagine many Ceph
> > admins wasting time chasing an erroneous deprecated warning because it came
> > out before the new solution was mature enough to replace the existing
> > solution.
> 
> There is no need to worry about this deprecation. It's mostly there
> so admins can prepare for the changes ahead, and it's mostly aimed at
> *new* installations, which can plan on using ceph-volume, as it
> provides great flexibility compared to ceph-disk.

Changing existing installations to output deprecation warnings from one
minor release to the next means it is not just for new installations,
though, no matter how you spin it. A mention in the release notes and
docs would be enough to get admins to test and use ceph-volume on new
installations.

I am pretty sure many admins will be bothered by all their OSD nodes
spamming the logs and their terminals with huge deprecation warnings on
each OSD activation[1] or other action involving ceph-disk, and keeping
that state for the remainder of Luminous unless they switch to a new
(and as yet not battle-tested) way of activating their OSDs seems crazy
to me.

I know our users will be, and given the short notice and the huge
impact this would have, we will likely have to remove the deprecation
warnings altogether in our (downstream) packages until we have finished
testing and implementing support for ceph-volume.

> 
> a) Many don't use ceph-disk or ceph-volume directly, so the tool you
> have right now (e.g. ceph-deploy or ceph-ansible) will still support
> ceph-disk; the previous ceph-deploy release is still available from
> PyPI:
>   https://pypi.python.org/pypi/ceph-deploy

We have >> 10k (user / customer managed!) installations on Ceph
Luminous alone, all using our wrapper around ceph-disk. Changing
something like this in the middle of a release causes huge headaches
for downstreams like us, and is not how a stable project is supposed to
be run.

> 
> b) Also, the current push will help anyone who is using ceph-deploy
> or ceph-disk in scripts/chef/etc. to have time to think about using
> the newer CLI based on ceph-volume.

A regular deprecation at the beginning of a release cycle in which the
replacement is deemed stable, followed by removal in the next release
cycle, would be adequate for this purpose.

I don't understand the rush to shoe-horn ceph-volume into existing,
supposedly stable Ceph installations at all - especially given the
current state of ceph-volume (we'll file bugs once we are done writing
them up, but a quick rudimentary test already showed it choking on
valid ceph.conf files because they contain leading whitespace, and
incomplete error handling that leaves crush map entries behind after
failed OSD creation attempts).
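
For illustration, a ceph.conf along these lines (placeholder values,
indentation as many deployment tools and examples write it) is the kind
of layout I mean - perfectly fine for the daemons and ceph-disk, yet
the leading whitespace in front of the keys is what our quick test saw
ceph-volume choke on:

    [global]
            fsid = 00000000-0000-0000-0000-000000000000
            mon_host = 192.0.2.1, 192.0.2.2, 192.0.2.3

    [osd]
            osd journal size = 5120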

I DO understand the motivation behind ceph-volume and the desire to get
rid of the udev-based trigger mess, but the solution is not to scare
users into switching in the middle of a release by introducing
deprecation warnings for a core piece of the deployment stack.

IMHO the only reason to push or force such a switch in this manner
would be a (grave) security or data corruption bug, which is not the
case at all here.

1: Have you looked at the journal / boot logs of a mid-sized OSD node
using ceph-disk for activation with the deprecation warning active? If
20% of my boot log is suddenly warnings, my first reaction will be that
something is very wrong. My likely second reaction, once I realize what
is going on, is probably not fit for posting to a public mailing
list ;)
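
To put a rough number on it, a quick check along these lines (assuming
systemd/journalctl; the grep pattern is a guess, as the exact warning
wording may differ) is what I have in mind:

    # count ceph-disk deprecation warnings emitted since boot
    journalctl -b | grep -ci 'ceph-disk.*deprecat'

    # compare against the total number of lines logged since boot
    journalctl -b | wc -l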

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


