Re: ceph-ansible installation error

Sorry if that sounds trollish. It wasn't intended to be. Look at it this way.

There are two approaches to running an IT installation. One is the free-wheeling independent approach. The other is the stuffy corporate approach.

Free-wheeling shops run things like Ubuntu. Or even BSD (but that's another matter). They often roll their own solutions. Frequently they were set up by a resident genius who liked some particular platform. And more power to them. Whether they do or don't buy into containerization is basically going to be determined by whether their guiding genius(es) take an interest in such things.

They can ignore the points I just made; said points were aimed at the stuffy business crowd. All that remains to be said on containers there is that, as far as I know, Ceph has never promoted setting up containerization manually.

Now, Ubuntu tries very hard to support the Red Hat Enterprise products, but there are certain quirks to the Red Hat world that make it extra challenging, literally from the very start. Kickstart and preseed are not directly compatible, and in general the RPM-based distros have one crucial difference from the Debian world: at no time does an RPM ever halt for interactive input. So the non-Red Hat folks do have to work a little harder. Ceph, I believe, does have some serious Ubuntu support, but at its core it is a Red Hat/IBM product, and IBM/Red Hat thus has significant control over its shaping and direction.

So, on to the stuffy business crowd: banks, insurance companies, Fortune corporations, as opposed to public institutions (where they're more likely to be free-wheeling).

These organizations generally pay for Red Hat contracts (bless them, Red Hat stock paid for my wife's retirement). They want the "neck to choke", they want someone to cry to (besides this forum!). They want access to the restricted Red Hat Q&A. They're generally going to install RHEL or at a minimum a CentOS-style equivalent. And they're going to go with what their paid support tells them to do.

As of Ceph Octopus, the various older methods of deploying Ceph have apparently been deprecated in favor of cephadm, and I expect that when you call for support, they're going to be hoping that's what you're using. The cephadm utility collapses a LOT of previously manual work down into a central control program. It also (as far as I know) has no support for the legacy Ceph structure, being entirely based on "administered" (i.e. containerized) servers. Rather like virsh handles almost everything for KVM-based VMs, cephadm is the only program you have to install to ceph-enable a real or virtual machine.
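To give a flavor of how much that collapses, a minimal test bootstrap runs something like this (the IP addresses and host names are placeholders, not anything from a real cluster):

    # Bootstrap the first node; this spins up the initial mon and mgr
    cephadm bootstrap --mon-ip 192.168.1.10

    # Add additional hosts to the cluster
    ceph orch host add node2 192.168.1.11
    ceph orch host add node3 192.168.1.12

    # Let the orchestrator turn every unused disk it finds into an OSD
    ceph orch apply osd --all-available-devices

Everything after the bootstrap is driven through "ceph orch" rather than by hand-editing things on each host.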

The fact that "administered" daemons are container-based is actually incidental, as from the Ceph administrator's point of view the daemons are black boxes. cephadm does not support roll-your-own Ceph containers; it uses its own images, and everything is controlled either via cephadm or systemd. Being a container expert is neither required nor expected, and could even potentially get you into trouble (not that I'd ever have had that problem...).
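If you do want to poke at an individual daemon, the systemd unit names follow a predictable pattern, and the orchestrator gives you the same view without touching the container runtime at all (the fsid and daemon names below are placeholders):

    # The daemons as the orchestrator sees them
    ceph orch ps

    # The same daemon as systemd sees it: one unit per daemon, keyed by fsid
    systemctl status ceph-<fsid>@osd.3.service

    # Restart a single daemon without ever invoking podman/docker directly
    ceph orch daemon restart osd.3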

The more critical point is that Ceph has abandoned the older install/maintenance processes. Ansible is no longer supported, and, as I said, even though the Octopus docs give examples using ceph-install, that utility is not available from the repository install of Octopus. So at this point it appears to be a choice between keeping the legacy structure and doing everything the hard (manual) way, or using cephadm. There is a serious possibility that eventually that choice will no longer exist.

Aside from making administration simpler overall, if you want to run multiple fsids on a single machine, the only simple way to do that is via cephadm. Incidentally, the Ceph config does still live in /etc/ceph and is essentially unchanged, allowing for the items that are now stored in the cluster's centralized config database instead of in text configs.
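Interacting with that centralized config looks roughly like this (the option names are just common examples, nothing specific to any one setup):

    # Dump every option currently held in the cluster's config database
    ceph config dump

    # Set and read back an option without editing ceph.conf
    ceph config set osd osd_memory_target 4294967296
    ceph config get osd.0 osd_memory_target

    # Pull an existing ceph.conf into the config database
    ceph config assimilate-conf -i /etc/ceph/ceph.conf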


A final note: the cephadm utility can also convert legacy resources into administered resources with a single command, which eases migration. You can have both legacy and administered resources on the same machine concurrently, so migrations can be gradual and outages are minimized.
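The conversion goes through the "adopt" subcommand; roughly like this (the daemon names here are placeholders, and the docs spell out the recommended order):

    # List every daemon cephadm can see on this host, legacy or administered
    cephadm ls

    # Convert legacy daemons into cephadm-managed (containerized) ones
    cephadm adopt --style legacy --name mon.<hostname>
    cephadm adopt --style legacy --name osd.12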

Ultimately, the choice is yours, for now. But it wouldn't be a bad idea to set up a test cephadm-based system just to get familiar with the concept. Avoid Octopus, though. It had some serious teething issues, as I found to my pain.

  Tim

On 9/2/24 11:41, Anthony D'Atri wrote:
I should know not to feed the trolls, but here goes. I was answering a question asked on the list, not arguing for or against containers.


2. Logs in containerized ceph almost all go straight to the system journal. Specialized subsystems such as Prometheus can be configured in other ways, but everything's filed under /var/lib/ceph/<fsid>/<subsystem> so there's relatively little confusion.
I see various other paths, which often aren’t /var/log. And don’t conflate “containerized Ceph” with “cephadm”.  There are lots of containerized deployments that don’t use cephadm / ceph orch.

3. I don't understand this, as I never stop all services just to play with firewalls. RHEL 8+ support firewall-cmd
Lots of people don’t run RHEL, and I did write “iptables”, not whatever obscure firewall system RHEL also happens to ship.

4. Ceph knows exactly the names and locations of its containers
Sometimes.  See above.

(NOTE: a "package" is NOT a "container")
Nobody claimed otherwise.

You don't talk to "Docker*" directly, though, as systemd handles that.
Not in my experience.  Docker is not Podman.  I have Ceph clusters *right now* that use Docker and do not have Podman installed.  They also aren’t RHEL.

6. As I said, Ceph does almost everything via cephadm
When deployed with cephadm.  You asked about containers, not about cephadm.  They are not fungible.

or ceph orch when running in containers, which actually means you need to learn less.
You assume that everyone already knows how containers roll, including the subtle dynamics of /etc/ceph/ceph.conf being mapped into the container’s filesystem view and potentially containing option settings that are perplexing unless one knows how to find and modify them.  That isn’t true.  When someone doesn’t know the dynamics of containers, they can add to the learning curve.  And yes, the docs do not yet pervasively cover the panoply of container scenarios.

Administration of ceph itself, is, again, done via systemd.
Sorry, but that often isn’t the case.

*Docker. As I've said elsewhere, Red Hat prefers Podman to Docker these days
Confused look.  I know people who prefer using vi or like Brussels sprouts.  Those aren’t relevant to the question about containerized deployments either. And the question was re containers, not about the organization formerly known as Red Hat.

and even if you install Docker, there's a Podman transparency feature.
See above.

Now if you really want networking headaches, run Podman containers rootless. I've learned how to account for the differences, but Ceph, fortunately, hasn't gone that route so far. Nor have they instituted private networks for Ceph internal controls.


On 9/1/24 15:54, Anthony D'Atri wrote:
* Docker networking is a hassle
* Not always clear how to get logs
* Not being able to update iptables without stopping all services
* Docker package management when the name changes at random
* Docker core leaks and kernel compatibility
* When someone isn’t already using containers, or has their own orchestration, going to containers steepens the learning curve.

Containers have advantages including decoupling the applications from the underlying OS

I would greatly like to know what the rationale is for avoiding containers
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



