Re: ceph-disk is now deprecated

I tend to agree with Wido. Many of us still rely on ceph-disk and hope to see it live a little longer.

Maged


On 2017-11-28 13:54, Alfredo Deza wrote:

On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <wido@xxxxxxxx> wrote:

Op 27 november 2017 om 14:36 schreef Alfredo Deza <adeza@xxxxxxxxxx>:


For the upcoming Luminous release (12.2.2), ceph-disk will be
officially in 'deprecated' mode (bug fixes only). A large banner with
deprecation information has been added to help raise awareness.


As much as I like ceph-volume and the work being done, is it really a good idea to use a minor release to deprecate a tool?

Can't we just introduce ceph-volume and deprecate ceph-disk at the release of M? Because when you upgrade to 12.2.2, existing integrations will suddenly have deprecation warnings thrown at them even though they haven't upgraded to a new major version.

ceph-volume has been present since the very first release of
Luminous; the deprecation warning in ceph-disk is the only "new"
thing introduced for 12.2.2.


As ceph-deploy doesn't support ceph-volume yet, I don't think it's a good idea to deprecate ceph-disk right now.

Work is being done on ceph-deploy to support ceph-volume exclusively
(ceph-disk support is being dropped entirely), which will mean a
non-backwards-compatible change to its API. A major version bump of
ceph-deploy, along with documentation, is being worked on to help
users transition to it.


How do others feel about this?

Wido

We are strongly suggesting using ceph-volume for new (and old) OSD
deployments. The only current exceptions to this are encrypted OSDs
and FreeBSD systems.
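
For new deployments, a minimal bluestore OSD with ceph-volume looks
roughly like this (/dev/sdb is only a placeholder device):

  # prepare and activate a new bluestore OSD in one step
  ceph-volume lvm create --bluestore --data /dev/sdb

  # or split it into two steps, activating with the OSD id and fsid
  # reported by prepare
  ceph-volume lvm prepare --bluestore --data /dev/sdb
  ceph-volume lvm activate <osd-id> <osd-fsid>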

Encryption support is planned and will be coming soon to ceph-volume.

A few items to consider:

* ceph-disk is expected to be fully removed by the Mimic release
* Existing OSDs are supported by ceph-volume. They can be "taken over" [0] (see the example right after the link below)
* ceph-ansible already fully supports ceph-volume and will soon default to it
* ceph-deploy support is planned and should be fully implemented soon


[0] http://docs.ceph.com/docs/master/ceph-volume/simple/
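
For reference, the "take over" flow from that link is roughly the
following (the data path and ids are placeholders):

  # capture the metadata of a running ceph-disk OSD so ceph-volume
  # can manage it from now on
  ceph-volume simple scan /var/lib/ceph/osd/ceph-0

  # enable it via ceph-volume and disable the ceph-disk/udev startup path
  ceph-volume simple activate 0 <osd-fsid>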


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
