Re: Why you might want packages not containers for Ceph deployments

I totally agree - we use a management system for all our Linux
machines, and adding containers makes that a lot more complex,
especially since our management system does not support containers.
Regards
magnus

On Wed, 2021-06-02 at 10:36 +0100, Matthew Vernon wrote:
>
> Hi,
>
> In the discussion after the Ceph Month talks yesterday, there was a bit
> of chat about cephadm / containers / packages. IIRC, Sage observed that
> a common reason in the recent user survey for not using cephadm was
> that it only worked on containerised deployments. I think he then went
> on to say that he hadn't heard any compelling reasons why not to use
> containers, and suggested that resistance was essentially a user
> education question[0].
>
> I'd like to suggest, briefly, that:
>
> * containerised deployments are more complex to manage, and this is
>   not simply a matter of familiarity
> * reducing the complexity of systems makes admins' lives easier
> * the trade-off of the pros and cons of containers vs packages is not
>   obvious, and will depend on deployment needs
> * Ceph users will benefit from both approaches being supported into
>   the future
>
> We make extensive use of containers at Sanger, particularly for
> scientific workflows, and also for bundling some web apps (e.g.
> Grafana). We've also looked at a number of container runtimes (Docker,
> Singularity, Charliecloud). They do have advantages - it's easy to
> distribute a complex userland in a way that will run on (almost) any
> target distribution; rapid "cloud" deployment; and some separation (via
> namespaces) of network/users/processes.
>
> For what I think of as a 'boring' Ceph deploy (i.e. install on a set of
> dedicated hardware and then run for a long time), I'm not sure any of
> these benefits are particularly relevant and/or compelling - Ceph
> upstream produce Ubuntu .debs and Canonical (via their Ubuntu Cloud
> Archive) provide .debs of a couple of different Ceph releases per
> Ubuntu LTS - meaning we can easily separate out OS upgrade from Ceph
> upgrade. And upgrading the Ceph packages _doesn't_ restart the
> daemons[1], meaning that we maintain control over restart order during
> an upgrade. And while we might briefly install packages from a PPA or
> similar to test a bugfix, we roll those (test-)cluster-wide, rather
> than trying to run a mixed set of versions on a single cluster - and I
> understand this single-version approach is best practice.
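>
> To make the restart-ordering point concrete, here's a rough sketch of
> the sort of sequence I mean (illustrative only - the package names,
> systemd targets and the noout step are examples of the general shape,
> not an official procedure):
>
>   # keep OSDs from being marked out while daemons are bounced
>   ceph osd set noout
>
>   # pull in the new packages; the running daemons are left alone
>   apt-get update
>   apt-get install --only-upgrade ceph-common ceph-mon ceph-mgr ceph-osd
>
>   # restart in our preferred order, checking health between steps
>   systemctl restart ceph-mon.target
>   systemctl restart ceph-mgr.target
>   systemctl restart ceph-osd.target    # host by host across the cluster
>
>   ceph osd unset noout
>   ceph versions    # confirm everything is running the new release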
>
> Deployment via containers does bring complexity; some examples we've
> found at Sanger (not all of them Ceph-related - Ceph itself we run
> from packages):
>
> * you now have two process supervision points - dockerd and systemd
> * docker updates (via distribution unattended-upgrades) have an
>   unfortunate habit of rudely restarting everything
> * docker squats on a chunk of RFC 1918 space (and telling it not to
>   can be a bore - see the daemon.json sketch after this list), which
>   coincides with our internal network...
> * there is more friction if you need to look inside containers
>   (particularly if you have a lot running on a host and are trying to
>   find out what's going on)
> * you typically need to be root to build docker containers (unlike
>   packages)
> * we already have package deployment infrastructure (which we'll need
>   regardless of deployment choice)
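>
> On the address-space point, the usual workaround is a small
> /etc/docker/daemon.json telling dockerd which ranges it may use - a
> sketch along these lines (the ranges here are made up; pick ones that
> don't collide with your site, and restart dockerd afterwards):
>
>   {
>     "bip": "172.30.0.1/24",
>     "default-address-pools": [
>       { "base": "172.31.0.0/16", "size": 24 }
>     ]
>   }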
>
> We also currently use systemd overrides to tweak some of the Ceph units
> (e.g. to do some network sanity checks before bringing up an OSD), and
> have some tools to match up OSD / journal / LVM / disk devices; I think
> these would be more fiddly in a containerised deployment. I'd accept
> that fixing these might just be a SMOP[2] on our part.
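>
> For concreteness, the override is just a small systemd drop-in; a
> sketch of the kind of thing I mean (the check script name is ours and
> purely illustrative):
>
>   # /etc/systemd/system/ceph-osd@.service.d/override.conf
>   # (as created by "systemctl edit ceph-osd@.service")
>   [Service]
>   # refuse to bring the OSD up if the cluster network looks wrong
>   ExecStartPre=/usr/local/sbin/ceph-network-sanity-check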
>
> Now none of this is show-stopping, and I am most definitely not saying
> "don't ship containers". But I think there is added complexity to your
> deployment from going the containers route, and that is not simply a
> "learn how to use containers" learning curve. I do think it is
> reasonable for an admin to want to reduce the complexity of what
> they're dealing with - after all, much of my job is trying to automate
> or simplify the management of complex systems!
>
> I can see from a software maintainer's point of view that just building
> one container and shipping it everywhere is easier than building
> packages for a number of different distributions (one of my other hats
> is a Debian developer, and I have a bunch of machinery for doing this
> sort of thing). But it would be a bit unfortunate if the general thrust
> of "let's make Ceph easier to set up and manage" was somewhat derailed
> by "you must use containers, even if they make your life harder".
>
> I'm not going to criticise anyone who decides to use a container-based
> deployment (and I'm sure there are plenty of setups where it's an
> obvious win), but if I were advising someone who wanted to set up and
> use a 'boring' Ceph cluster for the medium term, I'd still advise using
> packages. I don't think this makes me a Luddite :)
>
> Regards, and apologies for the wall of text,
>
> Matthew
>
> [0] I think that's a fair summary!
> [1] This hasn't always been true...
> [2] Simple (sic.) Matter of Programming