On Tue, Nov 14, 2017 at 11:37 AM, Kyle Bader <kyle.bader@xxxxxxxxx> wrote:
>>>> Using Ceph at any decent scale practically requires one to use at
>>>> least Puppet or a similar tool. I wouldn't add any unnecessary
>>>> complexity to already complex code just because of novice users, who
>>>> are going to have a hard time using Ceph anyway once a disk breaks
>>>> and needs to be replaced, or when performance goes to hell because
>>>> users are free to create and remove snapshots every 5 minutes.
>
> This discussion reminds me of a heated debate we had in the early days
> about whether configuration management should handle the provisioning
> of OSDs, or whether Ceph should have a tool to hide the ugliness. At
> the time, I was staunchly on the configuration management side. We
> used this horribleness to create new OSDs:
>
> https://github.com/dreamhost-cookbooks/ceph/blob/de5929eb45bda50785aa01181b281e25af0d1785/recipes/osd.rb
>
> Today we have ceph-disk (and soon ceph-volume)! I still have my
> reservations about the level of udev wizardry, which is tricky to
> debug, but it generally works and makes the experience better for the
> vast majority of operators, regardless. This led to a single,
> configuration-management-agnostic method for preparing OSDs. Nowadays
> all the Ansible/Chef/Puppet thingers use ceph-disk.

There is a separation needed here, I think, between tools and
abstractions that work at a local (or almost always local) level and
those that work at the cluster level. ceph-disk and ceph-volume are
good examples of the former, since they operate with the context of
local devices. At some point during the process they do need to inform
the cluster of their operations (e.g. "there is a new OSD, register it
as part of the cluster").

So configuration that makes sense for a localized service like
ceph-volume (or ceph-disk) belongs on the server itself. That is why
there are abstractions via Puppet/Chef/Ansible for these tools: the
configuration management systems are cluster-aware, and they just
delegate to the localized services.

For configuration it might make sense to have this sort of duality,
where some settings make sense for the server while the rest belong to
the cluster (a quick sketch at the end of this mail shows what I mean).
I'm not sure that everything must (exclusively) be file-based or live
on the monitors.

If we are trying to make sure users are happy with these changes,
let's accept/embrace views like the one from Piotr, which doesn't mean
throwing away ideas on where we should be headed.
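To make the local/cluster split concrete, here is a rough sketch of
what provisioning looks like with ceph-volume (the device path and the
OSD id/fsid are placeholders; check the ceph-volume lvm help output
for the exact flags on your release):

    # Local step: runs on the OSD host, using only the local device
    # and the bootstrap-osd keyring already present on that server.
    ceph-volume lvm prepare --data /dev/sdb

    # Local step: wires up the systemd units so the new OSD starts on
    # boot; the id and fsid come from "ceph-volume lvm list".
    ceph-volume lvm activate 0 <osd-fsid>

    # Cluster step: from any admin node, confirm the OSD registered
    # itself with the monitors and joined the CRUSH map.
    ceph osd tree

The first two commands only ever touch local state; it is the final,
cluster-wide registration that the monitors need to know about, which
is why the corresponding configuration naturally splits the same way.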