Point 1 (Why are we running as root?): All Ceph containers are instantiated as root (privileged, for "reasons"), but the daemons inside the container run as user 167 (the "ceph" user).

I don't understand your second point. If you're saying that the "container" is what specifies mount points, that's incorrect. It's the "docker run" invocation of the container that specifies which mount points are passed to the container, and that is controlled by "cephadm" today.

The validity period of a mutual TLS certificate means nothing if an attacker compromises the key.

On 1/28/22, 8:35 AM, "Marc" <Marc@xxxxxxxxxxxxxxxxx> wrote:

> Hey folks - We've been using a hack to get bind mounts into our manager
> containers for various reasons. We've realized that this quickly breaks
> down when our "hacks" don't exist inside "cephadm" in the manager
> container and we execute a "ceph orch upgrade". Is there an official way
> to add a bind mount to a manager container?

I am not really an expert on the use of cephadm or containers, but aren't
these things wrong in your "hack" thinking?

1. That would imply that you always have to run this as, eeehhh, root?
2. AFAIK best practice is that your orchestrator supplies volumes to your
   container.

> Our use case: We're using zabbix_sender + Zabbix to monitor Ceph; however,
> we use a certificate to encrypt monitoring traffic that we need the
> ability to rotate.

Generate long-term certificates from your own CA.

OT: stop hacking

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
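P.S. To illustrate the point that mount points come from the "docker run" invocation rather than from the container image itself, a minimal sketch (the image name and host/container paths below are made-up placeholders, not anything cephadm actually passes):

```shell
# Bind mounts are not a property of the image; they are supplied by
# whoever constructs the "docker run" command line -- in Ceph's case,
# cephadm. Image name and paths here are hypothetical examples.
docker run -d --name mgr-example \
  -v /etc/myorg/zabbix-certs:/etc/zabbix/certs:ro \
  quay.io/example/ceph:latest
```

This is why hand-edited mounts disappear on "ceph orch upgrade": cephadm regenerates that command line from its own state, not from whatever the running container happens to have.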
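P.P.S. For what it's worth, the "generate long-term certificates from your own CA" suggestion can be sketched with stock openssl along these lines (file names, subjects, and lifetimes are placeholder values, not anything Ceph- or Zabbix-specific):

```shell
# Create a private CA (illustrative subject and 10-year lifetime).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=Example Internal CA"

# Create a key and CSR for the monitoring endpoint (hypothetical CN).
openssl req -newkey rsa:2048 -nodes -keyout zabbix.key -out zabbix.csr \
  -subj "/CN=zabbix-sender"

# Sign the CSR with the CA for a long validity period (5 years here).
openssl x509 -req -in zabbix.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out zabbix.crt -days 1825

# Verify the resulting chain.
openssl verify -CAfile ca.crt zabbix.crt   # prints "zabbix.crt: OK"
```

Rotation then means re-issuing zabbix.crt from the same CA, which doesn't help if ca.key or zabbix.key itself leaks, per the point above about compromised keys.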