Re: ceph-ansible installation error


I should know better than to feed the trolls, but here goes.  I was answering a question asked on the list, not arguing for or against containers.


> 2. Logs in containerized ceph almost all go straight to the system journal. Specialized subsystems such as Prometheus can be configured in other ways, but everything's filed under /var/lib/ceph/<fsid>/<subsystem> so there's relatively little confusion.

I see various other paths, which often aren’t under /var/log. And don’t conflate “containerized Ceph” with “cephadm”.  There are lots of containerized deployments that don’t use cephadm / ceph orch.
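To illustrate the distinction (the fsid, hostnames, and container names below are placeholders, not taken from this thread): a cephadm deployment runs daemons under systemd units, so the journal is the place to look, while a non-cephadm containerized deployment may be known only to Docker itself:

```shell
# Sketch only -- unit and container names are hypothetical; adjust for your cluster.

# cephadm deployment: daemons run under systemd units named ceph-<fsid>@<daemon>,
# so their output lands in the system journal:
journalctl -u 'ceph-*@mon.host1' --since today

# plain Docker deployment (e.g. ceph-ansible with containers, no cephadm):
# systemd may know nothing about the container, so you ask Docker directly:
docker logs ceph-mon-host1
```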

> 3. I don't understand this, as I never stop all services just to play with firewalls. RHEL 8+ support firewall-cmd

Lots of people don’t run RHEL, and I did write “iptables”, not whatever obscure firewall system RHEL also happens to ship.
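For context on the iptables point: dockerd maintains its own chains in the filter and nat tables, which is why naive rule edits or flushes can collide with container networking. A quick way to see this (requires root on a host running Docker):

```shell
# The DOCKER chain is created and managed by dockerd, not by the admin's
# own rulesets; flushing all rules wipes it, and container networking
# breaks until the Docker daemon is restarted to rebuild its chains.
iptables -L DOCKER -n
iptables -t nat -L DOCKER -n
```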

> 4. Ceph knows exactly the names and locations of its containers

Sometimes.  See above.

> (NOTE: a "package" is NOT a "container")

Nobody claimed otherwise.

> You don't talk to "Docker*" directly, though, as systemd handles that.

Not in my experience.  Docker is not Podman.  I have Ceph clusters *right now* that use Docker and do not have Podman installed.  They also aren’t RHEL.

> 6. As I said, Ceph does almost everything via cephadm

When deployed with cephadm.  You asked about containers, not about cephadm.  They are not fungible.

> or ceph orch when running in containers, which actually means you need to learn less.

You assume that everyone already knows how containers roll, including the subtle dynamics of /etc/ceph/ceph.conf being mapped into the container’s filesystem view and potentially containing option settings that are perplexing unless one knows how to find and modify them.  That isn’t true.  When someone doesn’t know the dynamics of containers, they can add to the learning curve.  And yes, the docs do not yet pervasively cover the panoply of container scenarios.
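For example (the container name here is hypothetical, not from the thread): the ceph.conf a containerized daemon actually reads is whatever was bind-mounted into it, which is not necessarily the copy an operator edits on the host. Checking both takes knowing the container tooling:

```shell
# Sketch with a placeholder container name -- not a definitive layout.

# What the daemon actually sees inside its filesystem view:
docker exec ceph-mon-host1 cat /etc/ceph/ceph.conf

# Where that file really lives on the host -- follow the bind mounts:
docker inspect ceph-mon-host1 \
  --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
```

If the host copy and the mount source differ, edits to the host’s /etc/ceph/ceph.conf silently have no effect on the daemon.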

> Administration of ceph itself, is, again, done via systemd.

Sorry, but that often isn’t the case.

> *Docker. As I've said elsewhere, Red Hat prefers Podman to Docker these days

Confused look.  I know people who prefer using vi or who like Brussels sprouts.  Those preferences aren’t relevant to the question about containerized deployments either.  And the question was re containers, not about the organization formerly known as Red Hat.

> and even if you install Docker, there's a Podman transparency feature.

See above.

> Now if you really want networking headaches, run Podman containers rootless. I've learned how to account for the differences but Ceph, fortunately hasn't gone that route so far. Nor have they instituted private networks for Ceph internal controls.
> 
> 
> On 9/1/24 15:54, Anthony D'Atri wrote:
>> * Docker networking is a hassle
>> * Not always clear how to get logs
>> * Not being able to update iptables without stopping all services
>> * Docker package management when the name changes at random
>> * Docker core leaks and kernel compatibility
>> * When someone isn’t already using containers, or has their own orchestration, going to containers steepens the learning curve.
>> 
>> Containers have advantages including decoupling the applications from the underlying OS
>> 
>>> I would greatly like to know what the rationale is for avoiding containers
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



