Re: Why you might want packages not containers for Ceph deployments

My cephadm deployment on RHEL 8 created a systemd service for each container, complete with automatic restarts, and on the host the processes run under the 'ceph' user account.
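
For reference, something like this shows the generated units and who owns the processes on a host (the fsid below is just a placeholder for your cluster's id):

    # per-daemon units generated by cephadm, named ceph-<fsid>@<daemon>
    systemctl list-units 'ceph-*'
    # the daemon processes are visible on the host and run as 'ceph'
    ps -o user,pid,cmd -C ceph-osd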

The biggest issue I had with running as containers is that the generated unit.run script runs podman run --rm ...; with --rm, the container and its logs are removed when there is an issue, so it took extra effort to figure out what the problem was on one machine. (Although that turned out to be a bad memory chip, which would have manifested itself anyway.)
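
For anyone who hits the same thing: because the unit runs the container with --rm, 'podman logs' has nothing once the container exits, but the daemon's output should still be in the journal, roughly like this (fsid and daemon name are placeholders):

    # the container is gone, so pull the output from journald instead
    journalctl -u ceph-<fsid>@osd.3.service
    # or let cephadm locate the right journal entries for you
    cephadm logs --name osd.3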

As a manager of a team that develops microservices in containers, I have a mixed attitude towards them. The ability to know the version of Java and supporting libraries deployed for a given container, and to isolate updates one image at a time, can be a bonus. But with my client's CI/CD pipeline and how they want our containers to be built, simple tasks like upgrading a package version, upgrading the version of Java, or even replacing a certificate have become significantly more difficult, because we need to rebuild all of the containers and go through the QA process rather than just update the cert.

For my usage (at home, running Ceph on older hardware I converted to servers), I don't want to have to care about Ceph dependencies, and I also want to isolate Ceph from other things running on the server, so a container infrastructure works well. But I can see where packages can be much better in a well-maintained server infrastructure.



-----Original Message-----
From: Sasha Litvak <alexander.v.litvak@xxxxxxxxx> 
Sent: Thursday, June 3, 2021 3:57 AM
To: ceph-users <ceph-users@xxxxxxx>
Subject:  Re: Why you might want packages not containers for Ceph deployments

Podman containers will not restart due to a restart or failure of a centralized podman daemon, because podman does not have one. A container is not synonymous with Docker. This thread reminds me more and more of the systemd-hater threads, but I guess it is fine.

On Thu, Jun 3, 2021, 2:16 AM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

> Not using cephadm, I would also question other things like:
>
> - If it uses docker and the docker daemon fails, what happens to your containers?
> - I assume the ceph-osd containers need the Linux sysadmin capability. So 
> if you have to allow this via your OC, all your tasks potentially have 
> access to this permission. (That is why I chose not to allow the OC 
> access to it)
> - cephadm only runs with docker?
>
>
>
>
> > -----Original Message-----
> > From: Martin Verges <martin.verges@xxxxxxxx>
> > Sent: 02 June 2021 13:29
> > To: Matthew Vernon <mv3@xxxxxxxxxxxx>
> > Cc: ceph-users@xxxxxxx
> > Subject:  Re: Why you might want packages not containers 
> > for Ceph deployments
> >
> > Hello,
> >
> > I agree with Matthew; here at croit we work a lot with containers all 
> > day long. No problem with that, and enough knowledge to say for sure 
> > it's not about getting used to it.
> > For us and our decisions here, storage is the most valuable piece of 
> > IT equipment in a company. If you have problems with your storage, 
> > most likely you have huge pain, costs, problems, downtime, 
> > whatever. Therefore, your storage solution must be damn simple: you 
> > switch it on, it has to work.
> >
> > Take a short look at the Ceph documentation about how to deploy 
> > a cephadm cluster vs croit: we strongly believe ours is much easier, as 
> > we take away all the pain from the OS up to Ceph while keeping it simple 
> > behind the scenes. You can still always log in to a node, kill a 
> > process, attach strace or whatever you like, as you know it from 
> > years of Linux administration, without any complexity layers like 
> > docker/podman/... It's just frictionless. In the end, what do you 
> > need? A kernel, an initramfs, some systemd, a few libs and 
> > tooling, and the Ceph packages.
> >
> > In addition, we help lots of Ceph users on a regular basis with 
> > their hand-made setups, but we don't really want to touch the cephadm 
> > ones, as they are often harder to debug. But of course we do it 
> > anyway :).
> >
> > To have perfect storage, strip away anything unnecessary. Avoid 
> > any complexity, avoid anything that might affect your system. Keep 
> > it simple, stupid.
> >
> > --
> > Martin Verges
> > Managing director
> >
> > Mobile: +49 174 9335695
> > E-Mail: martin.verges@xxxxxxxx
> > Chat: https://t.me/MartinVerges
> >
> > croit GmbH, Freseniusstr. 31h, 81247 Munich
> > CEO: Martin Verges - VAT-ID: DE310638492 Com. register: Amtsgericht 
> > Munich HRB 231263
> >
> > Web: https://croit.io
> > YouTube: https://goo.gl/PGE1Bx
> >
> >
> > On Wed, 2 Jun 2021 at 11:38, Matthew Vernon <mv3@xxxxxxxxxxxx> wrote:
> >
> > > Hi,
> > >
> > > In the discussion after the Ceph Month talks yesterday, there was a
> > > bit of chat about cephadm / containers / packages. IIRC, Sage
> > > observed that a common reason in the recent user survey for not
> > > using cephadm was that it only worked on containerised deployments.
> > > I think he then went on to say that he hadn't heard any compelling
> > > reasons why not to use containers, and suggested that resistance
> > > was essentially a user education question[0].
> > >
> > > I'd like to suggest, briefly, that:
> > >
> > > * containerised deployments are more complex to manage, and this is
> > > not simply a matter of familiarity
> > > * reducing the complexity of systems makes admins' lives easier
> > > * the trade-off of the pros and cons of containers vs packages is
> > > not obvious, and will depend on deployment needs
> > > * Ceph users will benefit from both approaches being supported into
> > > the future
> > >
> > > We make extensive use of containers at Sanger, particularly for 
> > > scientific workflows, and also for bundling some web apps (e.g.
> > > Grafana). We've also looked at a number of container runtimes 
> > > (Docker, singularity, charliecloud). They do have advantages - 
> > > it's easy to distribute a complex userland in a way that will run 
> > > on (almost) any target distribution; rapid "cloud" deployment; 
> > > some separation (via
> > > namespaces) of network/users/processes.
> > >
> > > For what I think of as a 'boring' Ceph deploy (i.e. install on a
> > > set of dedicated hardware and then run for a long time), I'm not
> > > sure any of these benefits are particularly relevant and/or
> > > compelling - Ceph upstream produce Ubuntu .debs and Canonical (via
> > > their Ubuntu Cloud Archive) provide .debs of a couple of different
> > > Ceph releases per Ubuntu LTS - meaning we can easily separate out
> > > OS upgrade from Ceph upgrade. And upgrading the Ceph packages
> > > _doesn't_ restart the daemons[1], meaning that we maintain control
> > > over restart order during an upgrade. And while we might briefly
> > > install packages from a PPA or similar to test a bugfix, we roll
> > > those (test-)cluster-wide, rather than trying to run a mixed set of
> > > versions on a single cluster - and I understand this single-version
> > > approach is best practice.
> > >
> > > Deployment via containers does bring complexity; some examples 
> > > we've found at Sanger (not all Ceph-related, which we run from packages):
> > >
> > > * you now have 2 process supervision points - dockerd and systemd
> > > * docker updates (via distribution unattended-upgrades) have an
> > > unfortunate habit of rudely restarting everything
> > > * docker squats on a chunk of RFC 1918 space (and telling it not to
> > > can be a bore), which coincides with our internal network...
> > > * there is more friction if you need to look inside containers
> > > (particularly if you have a lot running on a host and are trying to
> > > find out what's going on)
> > > * you typically need to be root to build docker containers (unlike
> > > packages)
> > > * we already have package deployment infrastructure (which we'll
> > > need regardless of deployment choice)
> > >
> > > We also currently use systemd overrides to tweak some of the Ceph
> > > units (e.g. to do some network sanity checks before bringing up an
> > > OSD), and have some tools to pair OSD / journal / LVM / disk device
> > > up; I think these would be more fiddly in a containerised
> > > deployment. I'd accept that fixing these might just be a SMOP[2] on
> > > our part.
> > >
> > > Now none of this is show-stopping, and I am most definitely not
> > > saying "don't ship containers". But I think there is added
> > > complexity to your deployment from going the containers route, and
> > > that is not simply a "learn how to use containers" learning curve.
> > > I do think it is reasonable for an admin to want to reduce the
> > > complexity of what they're dealing with - after all, much of my job
> > > is trying to automate or simplify the management of complex systems!
> > >
> > > I can see from a software maintainer's point of view that just
> > > building one container and shipping it everywhere is easier than
> > > building packages for a number of different distributions (one of
> > > my other hats is a Debian developer, and I have a bunch of
> > > machinery for doing this sort of thing). But it would be a bit
> > > unfortunate if the general thrust of "let's make Ceph easier to set
> > > up and manage" was somewhat derailed with "you must use containers,
> > > even if they make your life harder".
> > >
> > > I'm not going to criticise anyone who decides to use a 
> > > container-based deployment (and I'm sure there are plenty of 
> > > setups where it's an obvious win), but if I were advising someone 
> > > who wanted to set up and use a 'boring' Ceph cluster for the 
> > > medium term, I'd still advise on using packages. I don't think 
> > > this makes me a luddite :)
> > >
> > > Regards, and apologies for the wall of text,
> > >
> > > Matthew
> > >
> > > [0] I think that's a fair summary!
> > > [1] This hasn't always been true...
> > > [2] Simple (sic.) Matter of Programming
> > >
> > >
> > > --
> > >  The Wellcome Sanger Institute is operated by Genome Research  
> > > Limited, a charity registered in England with number 1021457 and a  
> > > company registered in England with number 2742969, whose 
> > > registered  office is 215 Euston Road, London, NW1 2BE.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


