Re: Why you might want packages not containers for Ceph deployments

On Sat, Jun 19, 2021 at 3:43 PM Nico Schottelius
<nico.schottelius@xxxxxxxxxxx> wrote:
> Good evening,
>
> as an operator running Ceph clusters based on Debian and later Devuan
> for years, and recently testing ceph in rook, I would like to chime in
> on some of the topics mentioned here with a short review:
>
> Devuan/OS package:
>
> - Over all the years of changing from Debian to Devuan, changing the
>   Devuan versions, dist-upgrading - we did not encounter a single
>   issue at the OS level. The only real problems were when ceph version
>   incompatibilities between major versions happened. However, this
>   will not change with containers.
>
>   I do see the lack of proper packages for Alpine Linux, which would
>   be an amazingly lean target for running ceph.
>
>   The biggest problem I see is that ceph/cephadm is increasingly
>   relying on systemd, and that actually locks folks out.

I want to reiterate that while cephadm's requirements are
systemd+lvm+python3+containers, the orchestration framework does not
have any of these limitations, and is designed to allow you to plug in
other options.
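
To make that concrete, here is a rough sketch in Python of the shape of
that design (class and method names are made up for the example, not
taken from the mgr code): the interface a backend has to satisfy says
nothing about systemd, LVM, python3 on the target host, or containers,
so a package- or sysvinit-based backend could plug in just as well.

# Sketch only: names are illustrative, not from the ceph-mgr source.
from abc import ABC, abstractmethod
from typing import List


class OrchestratorBackend(ABC):
    """What a pluggable backend answers for: which hosts exist, and
    how daemons get placed and removed.  Nothing here assumes systemd
    or containers on the target hosts."""

    @abstractmethod
    def get_hosts(self) -> List[str]:
        ...

    @abstractmethod
    def apply_service(self, service_type: str, placement: List[str]) -> str:
        """Ensure the given service (mon, mgr, osd, ...) runs on the
        given hosts; return a status message."""

    @abstractmethod
    def remove_daemon(self, daemon_name: str) -> str:
        ...


class PackageSshBackend(OrchestratorBackend):
    """Hypothetical backend that installs distro packages and init
    scripts over ssh -- the kind of thing a non-systemd distro could
    provide."""

    def get_hosts(self) -> List[str]:
        return ["node1", "node2", "node3"]  # e.g. from an inventory file

    def apply_service(self, service_type: str, placement: List[str]) -> str:
        # install the package and drop an init script on each host
        return "scheduled %s on %s" % (service_type, ", ".join(placement))

    def remove_daemon(self, daemon_name: str) -> str:
        return "stopped and removed %s" % daemon_name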

> [...]
>
> Thus my suggestion for the ceph team is to focus on two of the three
> variants:
>
> - Keep providing a native, even manual, deployment mode. Let people
>   get an understanding of ceph and even develop their own tooling
>   around it. This enables distros, SMEs, Open Source communities,
>   hackers, and developers. Low barrier to entry, easy access, low
>   degree of automation.
>
> - For those who are into containers, advise them on how to embrace
>   k8s. How to use k8s on bare metal. Is it potentially even smarter to
>   run ceph on IPv6-only clusters? What does the architecture look like
>   with k8s? How does rook do autodetection, and what metrics can the
>   kube-prometheus grafana help with? etc. etc. The whole shebang that
>   you'll need to develop and create over time anyway.

Cephadm is intended to be the primary non-k8s option, since it seems
pretty clear that there is a significant (huge?) portion of the user
community that is not interested in adding kubernetes underneath their
storage (take all of the "containers add complexity" arguments and
multiply them by 100).  We used containers because, in our view, it
simplified the developer AND user experience.

But neither rook nor cephadm precludes deploying Ceph the traditional
way.  The newer capabilities in the dashboard that manage the
deployment of Ceph rely on the orchestrator API, so a traditional
deployment today cannot make use of these new features, but nothing is
preventing a non-container-based orchestrator implementation.
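
To illustrate what "relies on the orchestrator API" means in practice,
a minimal sketch (again with invented names, not the real interface):
the dashboard/CLI layer only talks to the abstract interface, so any
backend that implements it -- container-based or not -- would light up
the same features.

# Sketch with made-up names; the real interface lives in the mgr
# orchestrator module.
from typing import List, Protocol


class Backend(Protocol):
    def get_hosts(self) -> List[str]: ...
    def apply_service(self, service_type: str, placement: List[str]) -> str: ...


def scale_monitors(orch: Backend, count: int) -> str:
    """Roughly what a 'scale the mons' action in the dashboard boils
    down to: pick hosts, ask whatever backend is loaded to make it so."""
    hosts = orch.get_hosts()[:count]
    return orch.apply_service("mon", hosts)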

sage
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


