On Mon, May 13, 2019 at 6:56 PM <DHilsbos@xxxxxxxxxxxxxx> wrote:
>
> All;
>
> I'm working on spinning up a demonstration cluster using ceph, and yes, I'm installing it manually, for the purpose of learning.
>
> I can't seem to correctly create an OSD, as ceph-volume seems to only work if the cluster name is the default. If I rename my configuration file (at /etc/ceph/) to ceph.conf, I can manage to create an OSD, but then it fails to start.

If you rename your configuration file to something different from what your OSDs were created with, you will end up with an OSD that thinks it still belongs to the old cluster name. ceph-volume does support passing a configuration flag to set a custom cluster name, and it should work, but be aware that custom cluster names are no longer supported even though the flag still exists.

I bet that if you list your current OSDs with `ceph-volume lvm list --format=json` you will see that the OSD carries the older cluster name. The cluster name is "sticky": each device that is part of an OSD records which cluster it belongs to. There is a small sanity-check sketch at the end of this message.

If you decide to keep using custom cluster names, you will probably end up with issues that are either annoying or plainly unfixable. For reference, see:

https://bugzilla.redhat.com/show_bug.cgi?id=1459861

> I've tried adding the --cluster argument to ceph-volume, but that doesn't seem to affect anything.
>
> Any thoughts?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
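
P.S. In case it helps, here is a rough, untested sketch of that check in Python. It shells out to `ceph-volume lvm list --format=json` and prints the cluster name recorded in each OSD's LVM tags. The exact JSON layout (a dict keyed by OSD id, each entry carrying a "tags" dict with a "ceph.cluster_name" key) is an assumption based on recent ceph-volume releases, so adjust it to whatever your version actually emits.

    #!/usr/bin/env python3
    # Sketch only: report the cluster name each OSD device was created with.
    # Assumes ceph-volume is installed, run with sufficient privileges, and
    # that its JSON output uses the "tags"/"ceph.cluster_name" fields
    # (field names may differ between Ceph releases).
    import json
    import subprocess

    def list_osds():
        # ceph-volume prints a JSON object keyed by OSD id with --format=json
        out = subprocess.check_output(
            ["ceph-volume", "lvm", "list", "--format=json"]
        )
        return json.loads(out)

    def main():
        for osd_id, devices in list_osds().items():
            for dev in devices:
                tags = dev.get("tags", {})
                cluster = tags.get("ceph.cluster_name", "<unknown>")
                print("osd.{} on {}: cluster_name={}".format(
                    osd_id, dev.get("devices"), cluster))
                if cluster not in ("ceph", "<unknown>"):
                    # A custom name here will not match a renamed ceph.conf
                    print("  -> osd.{} was created with a custom cluster "
                          "name ({})".format(osd_id, cluster))

    if __name__ == "__main__":
        main()

If every OSD reports "ceph" but your config file uses a different cluster name (or vice versa), that mismatch is almost certainly why the OSD fails to start.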