Re: Why you might want packages not containers for Ceph deployments


We have no one currently using containers for anything. Therefore, we run old Ceph code to avoid them. If there were an option to not use containers on modern Ceph, that would be better for a lot of people who don't want them.

-Ed

On 6/7/2021 2:54 AM, Eneko Lacunza wrote:
Hi Marc,

On 4/6/21 at 16:39, Marc wrote:
Do you use RBD images in containers residing on OSD nodes? Does this give any problems? I used to have kernel-mounted CephFS on an OSD node; after a specific Luminous release this started giving me problems.
No, we use Ceph for VM storage. Some of the VMs host containers.

Cheers


-----Original Message-----
From: Eneko Lacunza <elacunza@xxxxxxxxx>
Sent: Friday, 4 June 2021 15:49
To: ceph-users@xxxxxxx
Subject: *****SPAM*****  Re: Why you might want packages
not containers for Ceph deployments

Hi,

We operate a few Ceph hyperconverged clusters with Proxmox, which
provides a custom Ceph package repository. They do great work, and
deployment is a breeze.

So, even though we currently rely on the Proxmox packages/distribution
rather than upstream, we have a number of other projects deployed with
containers, and we even distribute some of our own development as deb and
container packages, so I will comment with our view:

On 2/6/21 at 23:26, Oliver Freyermuth wrote:
[...]
If I operate services in containers built by developers, of course
this ensures the setup works, and dependencies are well tested, and
even upgrades work well — but it also means that,
at the end of the day, if I run 50 services in 50 different containers
from 50 different upstreams, I'll have up to 50 different versions of
OpenSSL floating around my production servers.
If a security issue is found in any of the packages used in all the
container images, I now need to trust the security teams of all the 50
developer groups building these containers
(and most FOSS projects won't have the resources, understandably...),
instead of the one security team of the distro I use. And then, I also
have to re-pull all these containers, after finding out that a
security fix has become available.
Or I need to build all these containers myself, and effectively take
over the complete job, and have my own security team.

This may scale somewhat well, if you have a team of 50 people, and
every person takes care of one service. Containers are often your
friend in this case[1],
since they allow you to isolate the different responsibilities along with
the service.

But this is rarely the case outside of industry, and especially not in
academics.
So the approach we chose for us is to have one common OS everywhere,
and automate all of our deployment and configuration management with
Puppet.
Of course, that puts us in one of the many corners out there, but it
scales extremely well to all services we operate,
and I can still trust the distro maintainers to keep the base OS safe
on all our servers, automate reboots etc.

For Ceph, we've actually seen questions about security issues already
on the list[0] (never answered AFAICT).
These are the two main issues I find with containers really:

- Keeping hosts up to date is more complex (apt-get update + apt-get
dist-upgrade, plus also some kind of docker pull + docker
restart/docker-compose up ...). Much of the time the second part is not
standard (I just deployed a Harbor service; the upgrade is quite simple, but I
have to know how to do it because it's specific to Harbor; maintenance would be
much easier if it were packaged in Debian). I won't say it's more difficult,
but it will be more diverse and complex.

- Container image quality and security support quality will vary
from upstream to upstream. You have to research each of them to know
where they stand. A distro (especially a good one like Debian, Ubuntu,
RHEL or SUSE) has known, quality security support for its repositories.
They will even fix issues not fixed by upstream (or backport fixes to the
distro's version...). This is more an upstream-vs-distro issue, really.
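The two-part upgrade flow in the first point above could be sketched as a small maintenance script. This is only an illustration: the Harbor compose path and the dry-run default are my own assumptions, not how any particular deployment does it.

```shell
#!/bin/sh
# Sketch of the two-part upgrade: standard distro packages first,
# then the service-specific container refresh.
# DRY_RUN defaults to 1, so the plan is printed instead of executed;
# set DRY_RUN=0 on a real node. The compose path is a made-up example.
set -eu
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# Part 1: the standard half -- identical on every Debian-family host.
run apt-get update
run apt-get -y dist-upgrade

# Part 2: the non-standard half -- each containerised service has its
# own pull/restart procedure that you have to know per service.
run docker compose -f /opt/harbor/docker-compose.yml pull
run docker compose -f /opt/harbor/docker-compose.yml up -d
```

The point of the dry-run wrapper is exactly the diversity problem described above: part 1 is the same everywhere, while part 2 has to be filled in per service.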

About the debugging issues reported with Ceph containers, I think those are
things waiting for a fix: why are logs written inside the container image (or an
ephemeral volume; I don't really know how that is done right now)
instead of an external named volume or a locally mapped dir in /var/log/ceph?
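As a sketch of the mapped-dir alternative suggested above (illustrative only; this is not how cephadm actually launches daemons, and the image tag and container name are example values), a container could be started with the host's /var/log/ceph bind-mounted in:

```shell
# Illustrative fragment: bind-mount the host's /var/log/ceph into the
# container, so logs land on the host filesystem and survive image
# replacement. Container name and image tag are example values.
docker run -d \
  --name ceph-daemon-example \
  -v /var/log/ceph:/var/log/ceph \
  quay.io/ceph/ceph:v15
```

With a bind mount (or a named volume) like this, the logs stay readable with ordinary host tools even when the container is recreated from a new image.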

All that said, I think it makes sense for an upstream project like
Ceph to distribute container images, as that is the most generic way to
distribute (you can deploy on any system/distro supporting container
images) and it eases development. But distributing only container images
could make more users depend on third-party distribution (global or
distro-specific), which would delay feedback/bug reports to upstream.

Cheers and thanks for the great work!

Eneko Lacunza
Zuzendari teknikoa | Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


--
Thank you for your time,

Edward H. Kalk IV
Information Technology Dept.
Server Specialist
Datacenter Virtualization and Storage Systems
Socket Telecom, LLC.
2703 Clark Lane
Columbia, MO 65202
573-817-0000 or 800-socket3 X218
