I'm arriving late to this thread, but a few things stood out that I wanted to clarify.

On Wed, Jun 2, 2021 at 4:28 PM Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
> To conclude, I strongly believe there's no one size fits all here.
>
> That was why I was hopeful when I first heard about the Ceph orchestrator idea, when it looked to be planned out to be modular,
> with the different tasks being implementable in several backends, so one could imagine them being implemented with containers, with classic SSH on bare-metal (i.e. ceph-deploy-like), ansible, rook or maybe others.
> Sadly, it seems it ended up being "container-only".
> Containers certainly have many uses, and we run thousands of them daily, but neither do they fit each and every existing requirement,
> nor are they a magic bullet to solve all issues.

The orchestrator layer is an abstraction of the tool(s) used to provision ceph. Cephadm is just one implementation of that abstraction (there is another implementation that talks to rook). The abstraction is not designed to be 'container-only,' although currently both implementations do use containers.

To add orchestrator support for a traditional deployment using packages, we have two options: (1) implement a third orchestrator module that handles package installation etc., or (2) modify cephadm to handle both container-based and package-based deployments. I suspect that (2) is less work, but I'll be honest that the cephadm team isn't yet swayed by the anti-container arguments, so there would be some lobbying and discussion to be done first!

For either option, there are also some minor orchestrator interface adjustments that may be needed. Cephadm can deploy an arbitrary ceph version/container image for individual daemons, but with packages it's per-host. With rook it's a per-cluster option (rook currently handles any upgrade/downgrade), so there is already some ambiguity and cleanup opportunity here.
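To make the idea concrete, here is a minimal sketch of what "one abstraction, multiple backends" looks like, including the per-daemon vs. per-host version distinction mentioned above. The class and method names (Orchestrator, deploy_daemon, etc.) are simplified stand-ins for illustration, not the actual ceph-mgr orchestrator interface:

```python
# Illustrative sketch only -- these names are hypothetical simplifications,
# not the real ceph-mgr orchestrator API.
from abc import ABC, abstractmethod


class Orchestrator(ABC):
    """Abstract provisioning backend: callers see one interface,
    regardless of how daemons actually get deployed."""

    @abstractmethod
    def deploy_daemon(self, daemon_type: str, host: str, version: str) -> str:
        ...


class ContainerOrchestrator(Orchestrator):
    """Container-based backend (cephadm-like): each daemon can run
    its own image, so version is effectively per-daemon."""

    def deploy_daemon(self, daemon_type, host, version):
        return f"run container image ceph:{version} for {daemon_type} on {host}"


class PackageOrchestrator(Orchestrator):
    """Package-based backend: installed packages are shared by all
    daemons on a host, so version is effectively per-host."""

    def __init__(self):
        self.host_versions = {}  # host -> installed package version

    def deploy_daemon(self, daemon_type, host, version):
        installed = self.host_versions.setdefault(host, version)
        if installed != version:
            # This is the interface ambiguity: a per-daemon version
            # request cannot be honored by a per-host package install.
            raise ValueError(
                f"{host} already has packages for {installed}; "
                f"cannot run {daemon_type} at {version}"
            )
        return f"start ceph-{daemon_type} ({version}) from packages on {host}"
```

The point of the sketch is that the caller only ever talks to Orchestrator; whether option (1) or (2) above is taken, the per-host constraint of packages has to surface somewhere in the interface.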
sage
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx