On Wed, 17 Mar 2021 at 20:17, Matthew H <matthew.heler@xxxxxxxxxxx> wrote:
>
> "A containerized environment just makes troubleshooting more
> difficult; getting access to and retrieving details on Ceph
> processes isn't as straightforward as with a non-containerized
> infrastructure. I am still not convinced that containerizing
> everything brings any benefits except the collocation of services."
>
> It changes the way you troubleshoot, but I don't find it more
> difficult in the issues I have seen and had. Even today, without
> containers, all services can be co-located within the same hosts
> (mons, mgrs, osds, mds). Is there a situation you've seen where
> that has not been the case?

New ceph users pop in all the time on the #ceph IRC and have
absolutely no idea how to see the relevant logs from the
containerized services. Being one of the people who run services on
bare metal (and VMs), I actually can't help them, and it seems
several other long-time ceph admins can't either.

Not that it is impossible, or probably even hard, to get at the
logs, but somewhere in the "it is so easy to get it up and running,
just pop a container and off you go" docs there seems to be a part
missing along the lines of "when the OSD crashes at boot, run this
to export the file normally called /var/log/ceph/ceph-osd.12.log"
(something like the commands at the end of this mail). Without
that, the whole thing becomes a black box to the users, and they
are left to wipe/reinstall or try something else when it doesn't
work.

In the end, I guess the project will see fewer useful reports with
"Assert Failed" logs from impossible conditions, and more people
turning away from something that could have been fixed in the long
run. I get some of the advantages, and for stateless services
elsewhere containers might be gold, but I am not equally
enthusiastic about them for ceph.

--
May the most significant bit of your life be positive.
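
P.S. Since I am complaining about the docs, here is roughly what I
believe the container equivalent looks like, as a sketch for a
cephadm-deployed cluster. I don't run one myself, so verify the
exact invocations before relying on them; osd.12 and <fsid> are
placeholders for your own daemon name and cluster id:

  # list the daemons cephadm manages on this host
  cephadm ls
  # dump the journal of one daemon (a wrapper around journalctl)
  cephadm logs --name osd.12
  # or ask journald directly; the unit name embeds the cluster fsid
  journalctl -u ceph-<fsid>@osd.12.service

That should at least get you what would have been ceph-osd.12.log on
a bare-metal install, assuming the daemon logs to stderr/journald as
the containers do by default.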