Containers make the initial cluster setup much faster, but they seem to lead people into situations where the container's ephemeral state actively works against their ability to figure out what went wrong and why. Perhaps these are clusters that were adopted into the new style, perhaps the containers are being run the wrong way, but there are a certain number of posts along the lines of "I pressed the button for a fully automated (re)deploy of X, Y and Z and it doesn't work". I would not like to end up in that situation while simultaneously handling real customers wondering why our storage is not serving IO at this moment.

Doing installs 'manually' is far from optimal, but at least I know the logs end up under /var/log/ceph/<clustername>-<daemon><instance>.log, and they stay there even if the OSD disk is totally dead and gone.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx