I'm currently running test pools deployed with mkcephfs, and am now investigating deploying with ceph-deploy. I've hit a couple of conceptual changes that I can't find any documentation for, and was wondering if someone here could explain how things now work.

While ceph-deploy creates an initial ceph.conf, it doesn't update it when I do things like 'osd create' to add new [osd.N] sections. Yet when I restart the ceph service, it picks up the new OSDs quite happily. How does it 'know' which OSDs it should be starting, and with what configuration? (My rough guess at what it is reading is sketched below.)

Since there are no sections corresponding to these new OSDs, how do I go about adding specific configuration for them, such as 'cluster addr', and then push the new config? (An example of the kind of section I mean also follows.) Or is there a way to pass custom configuration to the 'osd create' subcommand at the point of OSD creation?

I have subsequently updated the [osd] section to set 'osd_mount_options_xfs' and done a 'config push'; however, the mount options don't seem to change when I restart the ceph service. Any clues why this might be? (The section I pushed is included at the end.)
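For what it's worth, here is what I can see on one of the OSD nodes (paths are the defaults on my systems). My working guess is that the init script simply enumerates the data directories under /var/lib/ceph/osd/ rather than consulting ceph.conf for a list of OSDs:

    $ ls /var/lib/ceph/osd/
    ceph-0  ceph-1
    $ cat /var/lib/ceph/osd/ceph-0/whoami
    0

Is that roughly what is going on?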
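Concretely, this is the kind of per-OSD section I would have written by hand in the mkcephfs days (the host name and addresses here are made up for illustration):

    [osd.0]
        host = ceph-node1
        public addr = 192.168.1.11
        cluster addr = 10.0.1.11

followed by something like:

    $ ceph-deploy config push ceph-node1

Is hand-editing plus 'config push' still the right approach when the OSD has no section of its own, or does the absence of a section mean this is now handled some other way?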
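For the mount options, the section I pushed looks like the following (the particular option values are just what I happen to want and aren't significant):

    [osd]
        osd_mount_options_xfs = rw,noatime,inode64

After pushing this and restarting ceph, the data partitions still appear to be mounted with the old options.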
Thanks,

Matthew

--
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.