Re: ceph-disk is now deprecated

Hello,
Thanks for the heads-up. I'm currently maintaining a Jewel cluster
and am in the process of setting up a shiny new Luminous cluster,
writing Ansible roles along the way to make the setup reproducible.
I immediately had a look at ceph-volume, and I have some
questions/concerns, mainly because of my own setup, which is one OSD
per device, simple.

Running ceph-volume on Luminous 12.2.1 suggests that only the lvm
subcommand is available, and the man page only covers lvm. The online
documentation at http://docs.ceph.com/docs/master/ceph-volume/ does
list 'simple', but it lacks some of the ceph-disk commands, such as
'prepare', which seems crucial for the 'simple' scenario. Does the
ceph-disk deprecation mean that LVM is mandatory for using devices
with Ceph, or are the documentation and the tool just lagging behind,
i.e. will the 'simple' parts be added during the Luminous lifecycle,
well ahead of Mimic? Or am I missing something?
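
For concreteness, here is what I think the two workflows would look
like for my one-OSD-per-device setup. This is only my reading of the
master documentation, not something I have been able to verify on
12.2.1; the device names (/dev/sdb, /dev/sdb1) and the <osd-id> and
<osd-fsid> values are placeholders, and on some point releases
'lvm prepare' may want a pre-created logical volume rather than a
raw device:

  # New OSD via the lvm subcommand (bluestore)
  ceph-volume lvm prepare --bluestore --data /dev/sdb
  ceph-volume lvm activate <osd-id> <osd-fsid>
  # or prepare + activate in one step
  ceph-volume lvm create --bluestore --data /dev/sdb

  # Taking over an existing ceph-disk OSD via the 'simple' subcommand
  ceph-volume simple scan /dev/sdb1    # writes /etc/ceph/osd/<osd-id>-<osd-fsid>.json
  ceph-volume simple activate <osd-id> <osd-fsid>

If that reading is right, what I seem to be missing for my setup is a
'simple'-style way to prepare a brand-new device without going
through LVM.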

Best regards,
Andreas

On 27 November 2017 at 14:36, Alfredo Deza <adeza@xxxxxxxxxx> wrote:
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
>
> We strongly suggest using ceph-volume for new (and old) OSD
> deployments. The only current exceptions to this are encrypted OSDs
> and FreeBSD systems.
>
> Encryption support is planned and will be coming soon to ceph-volume.
>
> A few items to consider:
>
> * ceph-disk is expected to be fully removed by the Mimic release
> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
> * ceph-ansible already fully supports ceph-volume and will soon default to it
> * ceph-deploy support is planned and should be fully implemented soon
>
>
> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/


