Re: rgw: zipper store configuration in the zone object

How do you propose we do bootstrapping?

My intuition is that the RGWZoneParams would detail the store
configuration, but then it needs to /store/ that configuration
somewhere.

Normally I'd use a ceph.conf like this for dbstore:

rgw backend store = dbstore
dbstore db dir = /var/lib/ceph/radosgw
dbstore db name prefix = dbstore

Would I then run something like this?

radosgw-admin zone set --rgw-zone=zoney --infile /etc/ceph/zoney.json
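
If so, I'd assume zoney.json would just be the normal RGWZoneParams
json with your nested "stores" object added to it -- something roughly
like this (only a sketch; the dbstore fields are placeholders that
mirror the ceph.conf options above, whatever the driver actually
accepts):

{
  "id": "...",
  "name": "zoney",
  ...
  "stores": {
    "type": "dbstore",
    "db dir": "/var/lib/ceph/radosgw",
    "db name prefix": "dbstore"
  }
}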

What if we supported something along the lines of:

rgw realm store = file
rgw realm file = /etc/ceph/zoney.json

or

rgw realm store = dbstore
dbstore ....

or

rgw realm store = mons
mon_host = [v2:10.0.0.101:3300/0,v1:10.0.0.101:6789/0] [v2:10.0.0.102:3300/0,v1:10.0.0.102:6789/0] [v2:10.0.0.103:3300/0,v1:10.0.0.103:6789/0]
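
For the "file" case, startup could be as simple as reading that json
and handing the nested "stores" object to the factory you sketch
below. A rough sketch (the function name, includes, and error handling
here are just illustrative, and I'm assuming the JSONParser interface
from common/ceph_json.h):

#include <fstream>
#include <memory>
#include <sstream>
#include <string>
#include "common/ceph_json.h"   // JSONParser, JSONObj
#include "rgw_sal.h"            // rgw::sal::load_store() from your sketch

// bootstrap the store tree from "rgw realm file = /etc/ceph/zoney.json"
std::unique_ptr<rgw::sal::Store> bootstrap_store_from_file(const std::string& path)
{
  // slurp the zone params json from disk
  std::ifstream f(path);
  std::stringstream ss;
  ss << f.rdbuf();
  std::string data = ss.str();

  // parse it and pull out the nested, opaque "stores" object
  JSONParser p;
  if (!p.parse(data.c_str(), data.size())) {
    return nullptr;
  }
  JSONObj* stores = p.find_obj("stores");
  if (!stores) {
    return nullptr;
  }

  // hand everything under "stores" to the recursive factory
  return rgw::sal::load_store(stores);
}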

On Wed, Aug 31, 2022 at 1:14 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>
> all rgws in a zone need to serve the same data set, so we've agreed
> that their store configuration belongs in the RGWZoneParams object
> that they share. RGWZoneParams can already be parsed/serialized as
> json, so i'm proposing that we add the store configuration as a nested
> opaque json object
>
> the simplest configuration could look something like this:
>
>   "stores": {
>     "type": "rados"
>   }
>
> a rados backend with a caching filter in front could look like:
>
>   "stores": {
>     "type": "cache",
>     "size": 512,
>     "backend": {
>       "type": "rados"
>     }
>   }
>
> a more complicated example, where a cloud_filter redirects requests on
> buckets starting with 'cloud' to a remote s3 service, and satisfies
> everything else from a cached database backend:
>
>   "stores": {
>     "type": "cloud_filter",
>     "remote bucket prefix": "cloud",
>     "remote backend": {
>       "type": "s3",
>       "endpoint": "s3.example.com",
>       "credentials": "password"
>     },
>     "local backend": {
>       "type": "cache",
>       "backend": {
>         "type": "dbstore"
>       }
>     }
>   }
>
> each store definition requires a "type" so rgw knows which SAL driver
> to load, but all other fields (including nested backend stores) are
> only interpreted by that driver
>
> so given this json object 'stores', we can write a Store factory that
> recursively builds this tree of stores and returns the root. for the
> cloud_filter driver example (with error handling omitted):
>
>
> rgw_sal.cc:
>
> unique_ptr<rgw::sal::Store> rgw::sal::load_store(JSONObj* json)
> {
>   std::string store_type;
>   JSONDecoder::decode_json("store", store_type, json);
>
>   // load the sal driver and call its sal_create()
>   auto filename = fmt::format("/path/to/libsal_{}.so", store_type);
>   void* handle = ::dlopen(filename.c_str(), RTLD_NOW);
>   auto sal_create = reinterpret_cast<sal_create_fn>(::dlsym(handle, "sal_create"));
>   return sal_create(json);
> }
>
>
> libsal_cloud_filter.so:
>
> rgw::sal::Store* sal_create(JSONObj* json)
> {
>   // load the backends
>   JSONObj* remote_json = json->find("remote backend");
>   unique_ptr<rgw::sal::Store> remote_backend =
>       rgw::sal::load_store(remote_json);
>
>   JSONObj* local_json = json->find("local backend");
>   unique_ptr<rgw::sal::Store> local_backend = rgw::sal::load_store(local_json);
>
>   std::string prefix;
>   JSONDecoder::decode_json("remote bucket prefix", store_type, json);
>
>   return new CloudFilterStore(prefix, std::move(remote_backend),
>                               std::move(local_backend));
> }
>


