Re: RBD mirroring CLI proposal ...

On Wed, Sep 23, 2015 at 9:28 PM, Jason Dillaman <dillaman@xxxxxxxxxx> wrote:
>> So a pool policy is just a set of feature bits?
>
> It would have to store additional details as well.
>
>> I think Cinder at least creates images with rbd_default_features from
>> ceph.conf and adds in layering if it's not set, meaning there is no
>> interface for passing through feature bits (or anything else really,
>> things like striping options, etc).  What pool-level default feature
>> bits infrastructure would do is replace a big (cluster-level) hammer
>> with a smaller (pool-level) hammer.  You'd have to add librbd APIs for
>> it and someone eventually will try to follow suit and add defaults for
>> other settings.  You said you weren't attempting to create a mechanism
>> to specify arbitrary default features for a given pool, but I think it
>> will come up in the future if we introduce this - it's only logical.
>>
>> What we might want to do instead is use this mirroring milestone to add
>> support for a proper key-value interface for passing in features and
>> other settings for individual rbd images to OpenStack.  I assume it's
>> all python dicts with OpenStack, so it shouldn't be hard?  I know that
>> getting patches into OpenStack can be frustrating at times and I might
>> be underestimating the importance of the use case you have in mind, but
>> patching our OpenStack drivers rather than adding what essentially is
>> a workaround to librbd makes a lot more sense to me.
>>
>
> It would be less work to skip adding the pool-level defaults which is a plus given everything else required.  However, putting aside how long it would take for the required changes to trickle down from OpenStack, Qemu, etc (since I agree that shouldn't drive design), in some ways your proposal could be seen as blurring the configuration encapsulation between clients and Ceph.
>
> Is the goal to configure my storage policies in one place or should I have to update all my client configuration settings (not that big of a deal if you are using something like Puppet to push down consistent configs across your servers)? Trying to think like an end-user, I think I would prefer configuring it once within the storage system itself.  I am not familiar with any other storage systems that configure mirroring via OpenStack config files, but I could be wrong since there are a lot of volume drivers now.

I'm not very familiar with OpenStack so I don't know either, I'm just
pointing out that, at least as far as Cinder goes, we currently use
a cluster-wide default for something that is inherently a per-image
property, there is no way to override it, and only a small subset of
settings can be configured at all.  I don't see it as blurring
the configuration encapsulation: if a user is creating an image from
OpenStack (or any other client for that matter), they should be able to
specify all the settings they want for a given image and not rely on
cluster-wide or pool-wide defaults.  (Maybe I'm too fixed on this idea
that per-image properties should be per-image and you are trying to
think bigger.  What I'm ranting about here is status quo, mirroring and
the new use cases and configuration challenges it brings along are
somewhat off to the side.)
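
To make that concrete, here is a minimal sketch (assuming the librbd
Python bindings; the "volumes" pool and the image name are just
placeholders) of what I mean by per-image properties staying
per-image: the feature bits are stated at create time instead of
being inherited from rbd_default_features in ceph.conf:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')
    try:
        # Explicit per-image feature bits, no reliance on cluster-wide
        # or pool-wide defaults.
        features = rbd.RBD_FEATURE_LAYERING | rbd.RBD_FEATURE_EXCLUSIVE_LOCK
        rbd.RBD().create(ioctx, 'vol-0001', 10 * 1024**3,
                         old_format=False, features=features)
    finally:
        ioctx.close()
        cluster.shutdown()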

I'm not against pool-level defaults per se, I just think if we go down
this road it's going to be hard to draw a line in the future, and I want
to make sure we are not adding it just to work around deficiencies in
our OpenStack drivers (and possibly librbd create-like APIs).
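
Just to sketch the kind of key-value pass-through I suggested above
(hypothetical, driver-side only; the option names and the helper are
made up for illustration, not an existing Cinder interface):

    import rbd

    def create_image(ioctx, name, size_bytes, image_opts=None):
        # Per-image settings arrive as a plain dict (e.g. parsed from
        # volume-type extra specs); only what the caller provides is
        # passed through, everything else keeps librbd's defaults.
        opts = image_opts or {}
        kwargs = {
            'old_format': False,
            'features': opts.get('features', rbd.RBD_FEATURE_LAYERING),
        }
        if 'order' in opts:
            kwargs['order'] = opts['order']
        if 'stripe_unit' in opts and 'stripe_count' in opts:
            kwargs['stripe_unit'] = opts['stripe_unit']
            kwargs['stripe_count'] = opts['stripe_count']
        rbd.RBD().create(ioctx, name, size_bytes, **kwargs)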

Thanks,

                Ilya


