Re: ceph-disk is now deprecated

On Tue, Nov 28, 2017 at 9:22 AM, David Turner <drakonstein@xxxxxxxxx> wrote:
> Doesn't marking something as deprecated mean that there is a better option
> that we want you to use, and that you should switch to it sooner rather
> than later? I don't understand how this is ready to be marked as such if
> ceph-volume can't be switched to for all supported use cases. If ZFS,
> encryption, FreeBSD, etc. are all going to be supported under ceph-volume,
> then how can ceph-disk be deprecated before ceph-volume can support them? I
> can imagine many Ceph admins wasting time chasing an erroneous deprecation
> warning because it came out before the new solution was mature enough to
> replace the existing one.

There is no need to worry about this deprecation. It's mostly there so
that admins can prepare for the changes coming ahead, and it is aimed
primarily at *new* installations, which can plan on using ceph-volume;
ceph-volume provides much greater flexibility than ceph-disk.
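
As a rough illustration of that flexibility, ceph-volume can consume an
existing logical volume directly, which ceph-disk could not do (the vg/lv
and device names below are just placeholders):

  ceph-volume lvm create --data my_vg/my_lv    # use a pre-created LV
  ceph-volume lvm create --data /dev/sdb       # or hand it a whole device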

a) Many don't use ceph-disk or ceph-volume directly, so the tool you
have right now, e.g. ceph-deploy or ceph-ansible, will still support
ceph-disk. The previous ceph-deploy release is still available from PyPI:
  https://pypi.python.org/pypi/ceph-deploy

b) Also, the current push gives anyone who is using ceph-deploy or
ceph-disk in scripts/chef/etc. time to think about moving to the newer
CLI based on ceph-volume.
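
   For example (device paths below are only illustrative), where a
   script today might run:

     ceph-disk prepare /dev/sdb
     ceph-disk activate /dev/sdb1

   the rough ceph-volume equivalent would be something like:

     ceph-volume lvm create --data /dev/sdb

   (or the separate 'ceph-volume lvm prepare' and 'ceph-volume lvm
   activate' steps, if your tooling needs them split up).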


> On Tue, Nov 28, 2017 at 9:26 AM Willem Jan Withagen <wjw@xxxxxxxxxxx> wrote:
>>
>> On 28-11-2017 13:32, Alfredo Deza wrote:
>> >
>> > I understand that this would involve a significant effort to fully
>> > port over and drop ceph-disk entirely, and I don't think that dropping
>> > ceph-disk in Mimic is set in stone (yet).
>>
>> Alfredo,
>>
>> When I expressed my concerns about deprecating ceph-disk, I was led to
>> believe that I had at least two release cycles to come up with something
>> like a 'ceph-volume zfs ....'
>>
>> Reading this, is there a possibility that it will get dropped IN Mimic?
>> That would mean there is less than one release cycle to get it working?
>>
>> Thanx,
>> --WjW
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


