Re: Why you might want packages not containers for Ceph deployments

> On 4 Jun 2021, at 21:51, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:
> 
> Hi,
> 
> We operate a few hyperconverged Ceph clusters with Proxmox, which provides a custom Ceph package repository. They do great work, and deployment is a breeze.
> 
> So, even though we currently rely on Proxmox packages rather than upstream, we also deploy a number of other projects with containers, and we even distribute some of our own software both as deb packages and as container images, so I will comment from that perspective:
> 
> El 2/6/21 a las 23:26, Oliver Freyermuth escribió:
> [...]
>> 
> >> If I operate services in containers built by developers, this of course ensures the setup works, the dependencies are well tested, and even upgrades work well. But it also means that,
> >> at the end of the day, if I run 50 services in 50 different containers from 50 different upstreams, I'll have up to 50 different versions of OpenSSL floating around my production servers.
> >> If a security issue is found in any of the packages used in all those container images, I now need to trust the security teams of all 50 developer groups building these containers
> >> (and most FOSS projects won't have the resources, understandably...),
> >> instead of the one security team of the distro I use. And then I also have to re-pull all these containers after finding out that a security fix has become available.
> >> Or I need to build all these containers myself, effectively taking over the complete job, with my own security team.
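> >> 
> >> (To make the audit burden concrete: assuming Docker, and assuming each image ships an openssl binary, with made-up image names, checking those versions looks roughly like this:)
> >> 
> >>     # Print the OpenSSL version shipped in each image (image names are hypothetical).
> >>     for img in service-a:latest service-b:latest service-c:latest; do
> >>         printf '%s: ' "$img"
> >>         docker run --rm --entrypoint openssl "$img" version
> >>     done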
>> 
> >> This may scale somewhat well if you have a team of 50 people and every person takes care of one service. Containers are often your friend in this case[1],
> >> since they let you isolate the different responsibilities along with the services.
>> 
> >> But this is rarely the case outside of industry, and especially not in academia.
> >> So the approach we chose for ourselves is to have one common OS everywhere, and to automate all of our deployment and configuration management with Puppet.
> >> Of course, that puts us in one of the many corners out there, but it scales extremely well to all the services we operate,
> >> and I can still trust the distro maintainers to keep the base OS safe on all our servers, automate reboots, etc.
>> 
> >> For Ceph, we've actually already seen questions about security issues on the list[0] (never answered, AFAICT).
> 
> These are the two main issues I find with containers, really:
> 
> - Keeping hosts up to date is more complex (apt-get update + apt-get dist-upgrade, plus some kind of docker pull + docker restart / docker-compose up ...). Much of the time the second part is not standard (I just deployed a Harbor service; the upgrade is quite simple, but I have to know how to do it because it is service-specific, and maintenance would be much easier if it were packaged in Debian). I won't say it's more difficult, but it will be more diverse and complex. (See the sketch after these two points.)
> 
> - Container image quality and the quality of security support, which vary from upstream to upstream. You have to research each of them to know where they stand. A distro (especially a good one like Debian, Ubuntu, RHEL or SUSE) has known, quality security support for its repositories. They will even fix issues not fixed by upstream (or backport fixes to the distro's version...). This is more an upstream-vs-distro issue, really.
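> 
> (A sketch of that two-step upgrade in practice, assuming Docker Compose; the Harbor path is only illustrative, and a real Harbor upgrade involves extra service-specific steps, which is exactly the point:)
> 
>     # Host packages: one standard, distro-wide procedure.
>     apt-get update && apt-get dist-upgrade -y
> 
>     # Each containerized service: its own, service-specific procedure.
>     docker-compose -f /opt/harbor/docker-compose.yml pull
>     docker-compose -f /opt/harbor/docker-compose.yml up -d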
> 
> About debugging issues reported with Ceph containers, I think those are things waiting for a fix: why are logs written into the container image (or an ephemeral volume; I don't really know how that is done right now) instead of an external named volume or a locally mapped dir in /var/log/ceph?

You can find the logs with “journalctl” outside of the containers.
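
For a cephadm deployment, for example, that looks roughly like this (a sketch; “osd.0” is a placeholder daemon name):

    # List the daemons cephadm manages on this host, with their unit names.
    cephadm ls

    # Daemon logs land in journald under ceph-<fsid>@<daemon>:
    fsid=$(ceph fsid)
    journalctl -u "ceph-${fsid}@osd.0"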

> All that said, I think it makes sense for an upstream project like Ceph to distribute container images, as it is the most generic way to distribute (you can deploy on any system/distro supporting container images) and it eases development. But distributing only container images could make more users depend on third-party distributions (general or distro-specific), which would delay feedback/bug reports to upstream.
> 
> Cheers and thanks for the great work!
> 
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
> 
> Tel. +34 943 569 206 | https://www.binovo.es/
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
> 
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



