On Thu, 16 Aug 2012, Tommi Virtanen wrote:
> On Thu, Aug 16, 2012 at 3:32 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> > As for the new options, I suggest:
> >
> >  * osd fs type
> >  * osd fs devs (will work for mkcephfs, not for new stuff)
> >  * osd fs path
> >  * osd fs options
>
> What does osd_fs_path mean, and how is it different from the osd_data dir?

The idea was that you might want the fs mounted somewhere other than
osd_data.  I'm not sure it's useful; we may as well drop that...

> I'm expecting to need both mkfs-time options (btrfs metadata block
> size etc) and mount-time options (noatime etc).
>
> It would be nice if there was a way to set the options for all
> fstypes, and then just toggle which one is used (by default). That
> avoids bugs like trying to mkfs/mount btrfs with xfs-specific options,
> and vice versa.
>
> I'm not sure how well our config system will handle dynamic variable
> names -- ceph-authtool was fine with me just putting data in
> osd_crush_location, and we don't need to access these variables from
> C++, so it should be fine. If you really wanted to, you could probably
> cram them into a single variable, with ad hoc structured data in
> the string value, but that's ugly.. Or just hardcode the list of
> possible filesystems, and then it's not dynamic variable names
> anymore.

Yeah, ceph-conf will happily take anything.  The C++ code has to do
slightly more work to get arbitrary config fields, but that's not an
issue.

> So I'm dreaming of something like:
>
> [osd]
> # what mount options will be passed when an osd data disk is using
> # one of these filesystems; these are passed to mount -o
> osd mount options btrfs = herp,foo=bar
> osd mount options xfs = noatime,derp
>
> # what mkfs options are used when creating new osd data disk
> # filesystems
> osd mkfs options btrfs = --hur
> osd mkfs options xfs = --dur
>
> # what fstype to use by default when mkfs'ing; mounting will detect
> # what's there (with blkid) and work with anything
> osd mkfs type = btrfs
>
> # this may go away with "mkcephfs 2.0", and it will have to get more
> # complex if we provide something for journals too, etc, because you
> # may want to pair specific data disks to specific journals (DH has
> # this need).. haven't had time to think it through, which is why i'm
> # leaning toward "and here's a hook where you run something on the
> # host that calls ceph-disk-prepare etc on all the disks you want",
> # and using uuids to match journals to data disks -- this work has
> # not yet started
> osd fs devs = /dev/sdb /dev/sdc

This all looks good to me.  What do you think, Danny?

sage
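
For illustration, here is a minimal sketch of how a ceph-disk-prepare-style
hook might consume the per-fstype settings proposed above.  The key names
(osd_mkfs_type, osd_mkfs_options_<fstype>, osd_mount_options_<fstype>), the
ceph-conf --name=osd. --lookup invocation, and the fallback defaults are
assumptions based on this thread, not existing code:

#!/usr/bin/env python
# Rough sketch only: read the per-fstype mkfs/mount options from the
# config (as an osd would see them) and apply them to one data disk.
import subprocess

def get_conf(key, default=''):
    # Ask ceph-conf for a value in the [osd] context; fall back to
    # 'default' if the key is unset (ceph-conf exits non-zero then).
    try:
        out = subprocess.check_output(
            ['ceph-conf', '--name=osd.', '--lookup', key])
        return out.strip() or default
    except subprocess.CalledProcessError:
        return default

def prepare_data_disk(dev, mnt):
    fstype = get_conf('osd_mkfs_type', 'btrfs')
    mkfs_opts = get_conf('osd_mkfs_options_%s' % fstype).split()
    mount_opts = get_conf('osd_mount_options_%s' % fstype, 'noatime')

    # mkfs with the fstype-specific options, then mount with -o <options>.
    subprocess.check_call(['mkfs', '-t', fstype] + mkfs_opts + [dev])
    subprocess.check_call(
        ['mount', '-t', fstype, '-o', mount_opts, dev, mnt])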