On 27/11/2017 at 14:36, Alfredo Deza wrote:
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
>
> We are strongly suggesting using ceph-volume for new (and old) OSD
> deployments. The only current exceptions to this are encrypted OSDs
> and FreeBSD systems.
>
> Encryption support is planned and will be coming soon to ceph-volume.
>
> A few items to consider:
>
> * ceph-disk is expected to be fully removed by the Mimic release
> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
> * ceph-ansible already fully supports ceph-volume and will soon default to it
> * ceph-deploy support is planned and should be fully implemented soon
>
> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/

Would it be possible to update the "add-or-rm-osds" documentation so that it
also describes the process with ceph-volume? That would help adoption.

http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/

This page should be updated with the ceph-volume commands as well.

http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/

The documentation (at least for master, maybe for luminous) should keep both
options (ceph-disk and ceph-volume), but with a warning message encouraging
people to use ceph-volume instead of ceph-disk. A rough sketch of what such a
section could show is at the end of this message.

I agree with the comments here saying that deprecating ceph-disk in a minor
release is not what I would expect from a stable storage system, but I also
understand the need to move forward with ceph-volume (and bluestore). I think
keeping ceph-disk in Mimic is necessary, even if it receives no updates, just
for compatibility with old scripts.

-- 
Yoann Moulin
EPFL IC-IT
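
P.S. A minimal sketch of the commands such a documentation section could
cover, based on the ceph-volume docs linked above. The device path, OSD id
and fsid below are placeholders, not taken from a real cluster, and exact
flags may differ per release, so the online ceph-volume docs should stay the
reference:

    # create a new bluestore OSD on a blank device (prepare + activate in one step)
    ceph-volume lvm create --data /dev/sdb

    # or split it into two steps
    ceph-volume lvm prepare --data /dev/sdb
    ceph-volume lvm activate <osd-id> <osd-fsid>

    # "take over" an existing ceph-disk OSD:
    # scan stores a JSON description of the running OSD under /etc/ceph/osd/
    ceph-volume simple scan /var/lib/ceph/osd/ceph-0
    # then activate it from that JSON so it no longer depends on ceph-disk
    ceph-volume simple activate <osd-id> <osd-fsid>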