Re: ceph-ansible and crush location

On 03/11/2021 15:48, Stefan Kooman wrote:
> On 11/3/21 15:35, Simon Oosthoek wrote:
>> Dear list,
>>
>> I've recently found that it is possible to supply ceph-ansible with information about a crush location, but I fail to understand how this is actually used. It doesn't seem to have any effect when creating a cluster from scratch (I'm testing on a bunch of VMs generated by vagrant, cloud-init and some custom ansible playbooks).
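>>
>> For reference, this is roughly what I tried in host_vars, based on the osd_crush_location variable from ceph-ansible's sample group_vars (exact syntax may differ per version, and if I read the playbooks correctly it also needs crush_rule_config: true for the ceph-crush role to act on it):
>>
>>   # host_vars/cephosd-01.yml (hypothetical node name)
>>   osd_crush_location: "{ 'root': 'default', 'rack': 'rack1', 'host': '{{ ansible_hostname }}' }"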

>> Then I thought I might need to add the locations to the crushmap by hand and then rerun site.yml, but this doesn't update the crushmap either.

>> Then I was looking at the documentation here:
>> https://docs.ceph.com/en/octopus/rados/operations/crush-map/#crush-location

>> And it seems ceph is able to update the osd location upon startup, if configured to do so... I don't think this is being used in a cluster generated by ceph-ansible, though...
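>>
>> For example, the page above shows a crush location setting in ceph.conf along these lines (adapted here to a hypothetical node of mine; it would go in each node's own ceph.conf):
>>
>>   [osd]
>>   crush location = root=default rack=rack1 host=cephosd-01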

> osd_crush_update_on_start is true by default, so you would have to disable it explicitly.

OK, so this isn't happening for us, because there's no crush location configured in our nodes' /etc/ceph/ceph.conf files...
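
For what it's worth, the effective value can be checked against the cluster's config database rather than ceph.conf, e.g. (Octopus syntax, if I'm not mistaken):

  $ ceph config get osd osd_crush_update_on_start
  true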

>> Would it be possible/wise to modify ceph-ansible to e.g. generate files like /etc/ceph/crushlocation and fill them with information from the inventory, like

> Possible: yes. Wise: not sure. If you mess this up for whatever reason and buckets/OSDs get reshuffled, this might lead to massive data movement and, possibly even worse, availability issues, i.e. when all your OSDs are moved to buckets that do not match any CRUSH rule.

Indeed, getting this wrong is a major PITA, but not having the OSDs in the correct location is also undesirable.
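
If it does go wrong, I suppose it can at least be corrected bucket by bucket while rebalancing is paused, something along these lines (bucket/rack names hypothetical, and someone should correct me if norebalance doesn't cover this case):

  $ ceph osd set norebalance
  $ ceph osd crush move cephosd-01 rack=rack1
  $ ceph osd unset norebalance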

I prefer to document/configure everything in one place, so there is no contradictory data. In this light, I would say that ceph-ansible is the right way to set this up. (Now to figure out how and where ;-)
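
As a sketch of what I have in mind (file names and contents are hypothetical; the hook interface is the one described in the Ceph docs linked above):

  /etc/ceph/crushlocation, templated by ansible from the inventory:

    root=default datacenter=dc1 rack=rack1 host=cephosd-01

  /etc/ceph/crush-location-hook.sh, wired up in each node's ceph.conf
  with "crush location hook = /etc/ceph/crush-location-hook.sh":

    #!/bin/sh
    # Ceph calls the hook with --cluster/--id/--type arguments; this one
    # ignores them and prints the single location line the hook expects.
    cat /etc/ceph/crushlocation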

And of course it's bothersome to maintain a patch on top of stock ceph-ansible, so it would be really nice if this kind of change could be added upstream.

Cheers

/Simon



