Re: ceph-ansible in Pacific and beyond?

On 18/03/2021 09:09, Janne Johansson wrote:
On Wed, 17 Mar 2021 at 20:17, Matthew H <matthew.heler@xxxxxxxxxxx> wrote:

"A containerized environment just makes troubleshooting more difficult, getting access and retrieving details on Ceph processes isn't as straightforward as with a non containerized infrastructure. I am still not convinced that containerizing everything brings any benefits except the collocation of services."

It changes the way you troubleshoot, but I haven't found it more difficult in the issues I have seen and dealt with. Even today, without containers, all services can be co-located on the same hosts (mons, mgrs, osds, mds). Is there a situation you've seen where that has not been the case?

New Ceph users pop in all the time on the #ceph IRC channel and
have absolutely no idea how to see the relevant logs from the
containerized services.
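
For reference, under a typical cephadm/podman deployment the daemon
logs end up in journald rather than in /var/log/ceph, so something
along these lines usually works (osd.12 and <fsid> are placeholders
here):

    # list the containerized daemons cephadm manages on this host
    cephadm ls

    # show the log of a single containerized daemon
    cephadm logs --name osd.12

    # or query journald directly; the unit name embeds the cluster fsid
    journalctl -u ceph-<fsid>@osd.12.service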

Me being one of the people who run services on bare metal (and
VMs), I actually can't help them, and it seems several other old
Ceph admins can't either.


Me being one of them.

Yes, it's all possible with containers, but it's different. And I don't see the true benefit of running Ceph in Docker just yet.

It's another layer of abstraction that you need to understand. And when you need to do real emergency work, like using ceph-objectstore-tool to fix broken OSDs/PGs, it's just much easier on a bare-metal box than inside a container (if you ask me).
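
To illustrate the difference, a rough sketch (osd.12 and <fsid> are
placeholders): on bare metal you stop the daemon and point
ceph-objectstore-tool straight at the data path, while with cephadm
you first have to enter a maintenance shell for that OSD.

    # bare metal: stop the OSD, then inspect its store directly
    systemctl stop ceph-osd@12
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs

    # containerized (cephadm): stop the unit, then enter a container
    # shell that has the OSD's config and data path available
    systemctl stop ceph-<fsid>@osd.12.service
    cephadm shell --name osd.12
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs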

So no, I am not convinced yet. I'm not against it, but personally I would say it's not the only way forward.

DEB and RPM packages are still alive and kicking.

Wido

Not that it is impossible, or even necessarily hard, to get them,
but somewhere in the "it is so easy to get it up and running, just
pop a container and off you go" docs there seems to be a missing
part that says "when the OSD crashes at boot, run this to export
what would normally be /var/log/ceph/ceph-osd.12.log". That means
it becomes a black box to the users, and they are left to wipe and
reinstall (or something else) when it doesn't work. In the end, I
guess the project will see fewer useful reports with "Assert
Failed" logs from impossible conditions, and more people turning
away from something that could have been fixed in the long run.
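
The missing doc snippet could be as small as this, assuming the
journald unit naming that cephadm uses (<fsid> is again a
placeholder):

    # dump the journal for the crashed OSD into a plain file,
    # i.e. what used to live at /var/log/ceph/ceph-osd.12.log
    journalctl -u ceph-<fsid>@osd.12.service --no-pager > ceph-osd.12.log

    # or tell Ceph to write classic log files again
    ceph config set global log_to_file true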

I get some of the advantages, and for stateless services elsewhere
containers might be gold, but I am not equally enthusiastic about
them for Ceph.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


