Those are reasonable objections, although some are now dated, and several of them are further addressed by Ceph itself. So let me present my take.
1. Networking. You can set up some gnarly virtual networks in both container and cloud systems, it's true. Docker has also changed some of its original rules, but more on that in a bit. In my non-Ceph containers I've simply used host networking, which has its drawbacks, but it's simple and it's all I require.
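For what it's worth, host networking is just one flag; roughly like this (the image name is only a placeholder, not how cephadm launches anything):

   # run a container directly on the host's network stack: no bridge, no NAT,
   # so ports bind on the host exactly as a native process would
   podman run --rm --network=host registry.example.com/my-build-image:latest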
2. Logs in containerized Ceph almost all go straight to the system journal. Specialized subsystems such as Prometheus can be configured in other ways, but everything's filed under /var/lib/ceph/<fsid>/<subsystem>, so there's relatively little confusion.
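If you do want to pull a specific daemon's log out of the journal, something like this works on a cephadm-managed host (the fsid and daemon name are placeholders):

   # the daemons run as ceph-<fsid>@<daemon>.service units
   journalctl -u ceph-<fsid>@osd.0.service
   # or let cephadm resolve the unit name for you
   cephadm logs --name osd.0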
3. I don't understand this, as I never stop all services just to play with firewalls. RHEL 8+ supports firewall-cmd, and you can open Ceph up with --add-service ceph and --add-service ceph-mon. Make them --permanent and do a --reload and it's all done.
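Concretely, on a RHEL-family box that's roughly:

   firewall-cmd --permanent --add-service ceph-mon
   firewall-cmd --permanent --add-service ceph
   firewall-cmd --reload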
4. Ceph knows exactly the names and locations of its containers (NOTE: a "package" is NOT a "container"). Within Ceph, almost all services actually employ the same container image, just with different invocation options. You don't talk to "Docker*" directly, though, as systemd handles that.
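You can see that for yourself with something like:

   # ask Ceph which daemons it runs, on which hosts, from which image
   ceph orch ps
   # or inspect the containers on one host directly; note the shared image
   podman ps --format '{{.Names}}  {{.Image}}'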
5. I've never encountered this, so I can say nothing. But I run
containers for about 6 different build base systems 24x7.
6. As I said, Ceph does almost everything via cephadm or ceph orch when running in containers, which actually means you need to learn less. Administration of Ceph itself is, again, done via systemd.
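Day to day that looks roughly like this (the fsid and hostname are placeholders):

   # orchestrator-level view and control
   ceph orch ls
   ceph orch restart mgr
   # the daemons themselves are ordinary systemd units on each host
   systemctl restart ceph-<fsid>@mon.<hostname>.service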
*Docker. As I've said elsewhere, Red Hat prefers Podman to Docker these days, and even if you install Docker, there's a Podman transparency feature. Now, if you really want networking headaches, run Podman containers rootless. I've learned how to account for the differences, but Ceph, fortunately, hasn't gone that route so far. Nor have they instituted private networks for Ceph internal controls.
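That transparency feature, as I understand it, is just the podman-docker package: it installs a docker CLI shim that calls Podman, so on a RHEL-family host:

   dnf install podman-docker
   docker ps      # actually invokes podman under the hood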
On 9/1/24 15:54, Anthony D'Atri wrote:
* Docker networking is a hassle
* Not always clear how to get logs
* Not being able to update iptables without stopping all services
* Docker package management when the name changes at random
* Docker core leaks and kernel compatibility
* When someone isn’t already using containers, or has their own orchestration, going to containers steepens the learning curve.
Containers have advantages including decoupling the applications from the underlying OS
I would greatly like to know what the rationale is for avoiding containers