On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett <matt@xxxxxxxxxxxxx> wrote:
Going to take another stab at this...

We have a development environment (made up of VMs) for developing and testing the deployment tools for a particular service that depends on CephFS for sharing state data between hosts. In production we will be using filestore OSDs because of the very low volume of data (a few hundred kilobytes) and the very low rate of change. There's insufficient performance benefit for it to make sense for us to create an operational exception by configuring the hardware differently from everything else just to have separate block devices.

Unfortunately, even though the documentation says that filestore OSDs are well tested and supported, they don't seem to be well documented.
Sadly, the documentation on ceph-deploy is a bit behind :(
In a recent test of our deployment tools (using Kraken on CentOS 7) the 'ceph-deploy osd' steps failed. Assuming this was simply because Kraken is now so far past EOL that it just wasn't supported properly on an updated CentOS box, I started working on an update to Luminous. However, I've since discovered that the problem is actually that ceph-deploy's OSD 'prepare' and 'activate' commands have been deprecated regardless of ceph release. I now realize that ceph-deploy is maintained independently from the rest of Ceph, but not documented independently, so the Ceph documentation that references ceph-deploy now seems to be frequently incorrect.

Except where mentioned otherwise, the rest of this is using the latest Luminous from the download.ceph.com yum archive (12.2.10) with ceph-deploy 2.0.1.

Our scripts, written for Kraken, were doing this to create filestore OSDs on four dev VMs:

ceph-deploy osd prepare tldhost01:/var/local/osd0 tldhost02:/var/local/osd0 tldhost03:/var/local/osd0 tldhost04:/var/local/osd0
ceph-deploy osd activate tldhost01:/var/local/osd0 tldhost02:/var/local/osd0 tldhost03:/var/local/osd0 tldhost04:/var/local/osd0
You are using the HOST:DIR option, which is a bit old; I think it was supported until Jewel. Since you are using 2.0.1 you should be using only 'osd create' with a logical volume or full block device, as defined here: http://docs.ceph.com/docs/mimic/ceph-volume/lvm/prepare/#ceph-volume-lvm-prepare-filestore . ceph-deploy calls ceph-volume with the same underlying syntax. Since this is a VM, you can just add additional smaller raw devices (e.g. /dev/sde) and use one of them for the journal.
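As a minimal sketch of what that looks like with ceph-deploy 2.0.1, assuming you attach two spare raw disks to the VM (the names /dev/sdb and /dev/sde below are just placeholders for whatever devices actually show up on your hosts):

# one host at a time with the new syntax; data on one raw device,
# journal on a second, smaller one
ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sde tldhost01

Repeat the same command once per host; ceph-volume will take care of creating the LVM pieces and the filesystem on the data device for you.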
Both 'prepare' and 'activate' seem to be completely deprecated now (neither shows up in the help output generated when the above commands fail) in Kraken and Luminous. This seems to have changed in the last 60 days or so. The above commands now fail with this error:

usage: ceph-deploy osd [-h] {list,create} ...
ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare' (choose from 'list', 'create')

I'm trying to figure out the 'ceph-deploy osd create' syntax to duplicate the above, but the documentation is no help. The Luminous documentation still shows the above prepare/activate syntax as valid, and continues to show the journal path as being optional for the 'ceph-deploy osd create' command. The same documentation for Mimic seems to be updated for the new ceph-deploy syntax, including the elimination of 'prepare' and 'activate', but doesn't include specifics for a filestore deployment.

The new syntax seems to suggest I can now only do one host at a time, and must split up the host, data, and journal values. After much trial and error I've also found it's now required to specify the journal path, but not knowing for sure what ceph-deploy was doing in the background with the journal path by default before, I've had a hard time sorting out things to try with the new syntax.

Following the above logic, and skipping over a few things I've tried to get here, in my latest attempt I've moved the ceph data down one level in the directory tree and added a journal directory. Where tldhost01 is localhost:

mkdir -p /var/local/ceph/{osd0,journal}
ceph-deploy osd create --data /var/local/ceph/osd0 --journal /var/local/ceph/journal --filestore tldhost01

The assumption in this is that --data and --journal accept filesystem paths the same way the 'prepare' and 'activate' commands used to, but that is clearly not the case, as the above complains that I have not supplied block devices. It looks like --filestore is not doing what I hoped.
You cannot use a DIR as the argument to --data and --journal, as explained above. --filestore doesn't mean a filesystem path option here; it still needs a block device (or logical volume) and will automatically create the filesystem on that device.
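If you want the data on LVM explicitly (roughly what the ceph-volume page linked above describes), a rough sketch would be something like the following, run on the OSD host; the volume group and LV names and the /dev/sdb and /dev/sde devices are only illustrative:

# carve an LV for the OSD data out of a spare raw disk
pvcreate /dev/sdb
vgcreate ceph-vg /dev/sdb
lvcreate -n osd0-data -l 100%FREE ceph-vg

# then point ceph-deploy at the LV (vg/lv form) and a small raw
# device for the journal; ceph-deploy hands this to ceph-volume
ceph-deploy osd create --filestore --data ceph-vg/osd0-data --journal /dev/sde tldhost01

Either way the key point is the same: --data and --journal must be block devices or logical volumes, not directories.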
At this point I'm stuck. I've gone through all the documentation I can find, and although it frequently mentions that Ceph started by storing its data on the filesystem and that doing so is still well supported, I can't actually find any documentation that says how to do it. When we started this project we used information from the quickstart documents to get filestore OSDs set up, but even the quickstart documents don't seem to supply that information (anymore).

Thanks for any pointers anyone can supply.

Matt
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com