Re: RBD mirroring CLI proposal ...

> So a pool policy is just a set of feature bits?

It would have to store additional details as well.

> I think Cinder at least creates images with rbd_default_features from
> ceph.conf and adds in layering if it's not set, meaning there is no
> interface for passing through feature bits (or anything else really,
> things like striping options, etc).  What pool-level default feature
> bits infrastructure would do is replace a big (cluster-level) hammer
> with a smaller (pool-level) hammer.  You'd have to add librbd APIs for
> it and someone eventually will try to follow suit and add defaults for
> other settings.  You said you weren't attempting to create a mechanism
> to specify arbitrary default features for a given pool, but I think it
> will come up in the future if we introduce this - it's only logical.
> 
> What we might want to do instead is use this mirroring milestone to add
> support for a proper key-value interface for passing in features and
> other settings for individual rbd images to OpenStack.  I assume it's
> all python dicts with OpenStack, so it shouldn't be hard?  I know that
> getting patches into OpenStack can be frustrating at times and I might
> be underestimating the importance of the use case you have in mind, but
> patching our OpenStack drivers rather than adding what essentially is
> a workaround to librbd makes a lot more sense to me.
> 

It would be less work to skip adding the pool-level defaults, which is a plus given everything else required.  However, putting aside how long it would take for the required changes to trickle down from OpenStack, QEMU, etc. (I agree that shouldn't drive the design), in some ways your proposal could be seen as blurring the configuration encapsulation between clients and Ceph.

Is the goal to configure my storage policies in one place, or should I have to update all of my client configuration settings (not a big deal if you are using something like Puppet to push consistent configs across your servers)?  Trying to think like an end user, I would prefer to configure it once within the storage system itself.  I am not familiar with any other storage system that configures mirroring via OpenStack config files, but I could be wrong since there are a lot of volume drivers now.
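
To make that concrete, here is a minimal sketch of what "configure it once on the storage side" could look like, assuming the proposed pool policy were eventually exposed through the rbd Python bindings with something like a mirror_mode_set() call -- the names below are assumptions on my part, not part of the proposal:

import rados
import rbd

# Sketch only: assumes a pool-level mirroring policy setter roughly like
# mirror_mode_set(); the actual interface is still being designed.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')
    try:
        # Enable the mirroring policy once for the whole pool on the
        # storage side; clients creating images in 'volumes' would then
        # inherit it without touching per-client ceph.conf or OpenStack
        # driver settings.
        rbd.RBD().mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_POOL)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()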

I do like the idea of key/value configuration pairs on image create; I even proposed that a few weeks ago on a separate email thread, since we shouldn't keep expanding rbd_create/rbd_clone/rbd_copy for every possible configuration override.
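
For illustration, a rough sketch of how that could look from the OpenStack side, assuming a hypothetical helper (create_with_opts() is made up here) layered over the existing Python bindings and a plain dict of per-image settings:

import rbd

def create_with_opts(ioctx, name, size, opts):
    # Hypothetical wrapper: pass per-image settings as one generic mapping
    # instead of growing the rbd_create/rbd_clone/rbd_copy signatures for
    # every new knob.  In Cinder this dict could be built straight from
    # volume-type extra specs.
    kwargs = {k: opts[k]
              for k in ('order', 'features', 'stripe_unit', 'stripe_count')
              if k in opts}
    rbd.RBD().create(ioctx, name, size, old_format=False, **kwargs)

# e.g. create_with_opts(ioctx, 'volume-1234', 10 * 1024**3,
#                       {'features': 61, 'stripe_unit': 4194304,
#                        'stripe_count': 8})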

--

Jason