Re: rgw: zipper store configuration in the zone object

On Fri, Sep 2, 2022 at 6:01 PM Kyle Bader <kyle.bader@xxxxxxxxx> wrote:
>
> How do you propose we do bootstrapping?
>
> My intuition is that the RGWZoneParams would detail the store
> configuration, but then it needs to /store/ that configuration
> somewhere.

in https://github.com/ceph/ceph/pull/47679, i've started splitting
these realm/period/zonegroup/zone objects into a separate ConfigStore
that can be loaded first to bootstrap this store configuration

the rados backend stores all of these objects in a '.rgw.root' pool
shared by all zones/radosgws hosted on the cluster, so this seems like
a natural split
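
to make the bootstrap order concrete, startup would go roughly like
this (the names below are only illustrative, not the actual interfaces
from the PR):

  // 1. build the ConfigStore from ceph.conf options alone
  auto cfgstore = create_config_store(cct);
  // 2. read the zone's params, which carry the nested "stores" json
  RGWZoneParams zone;
  cfgstore->read_zone(zone_name, &zone);
  // 3. hand that json to the SAL factory to build the driver tree
  auto store = rgw::sal::load_store(zone.stores_json);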

>
> Normally I'd use a ceph.conf like this for dbstore
>
> rgw backend store = dbstore
> dbstore db dir = /var/lib/ceph/radosgw
> dbstore db name prefix = dbstore
>
> would I then run something like this?
>
> radosgw-admin zone set --rgw-zone=zoney --infile /etc/ceph/zoney.json

right, either 'zone set' to specify json for the entire RGWZoneParams,
or a new command to set just the 'stores' part like:

$ radosgw-admin zone modify --rgw-zone zoney --zone-stores stores.json
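
where stores.json would contain just the nested 'stores' object. for
your dbstore example it might look something like this (the exact
field names are up to the dbstore driver, these are only illustrative):

  {
    "type": "dbstore",
    "db dir": "/var/lib/ceph/radosgw",
    "db name prefix": "dbstore"
  }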

>
> What if we supported something along the lines of
>
> rgw realm store = file
> rgw realm file = /etc/ceph/zoney.json
>
> or
>
> rgw realm store = dbstore
> dbstore ....
>
> or
>
> rgw realm store = mons
> mon_host = [v2:10.0.0.101:3300/0,v1:10.0.0.101:6789/0]
> [v2:10.0.0.102:3300/0,v1:10.0.0.102:6789/0]
> [v2:10.0.0.103:3300/0,v1:10.0.0.103:6789/0]

to fully support multisite, this ConfigStore would need to be
persistent and mutable, ideally with some atomicity guarantees so we
can detect/resolve racing changes to the config

but outside of multisite, other file- or mon-based ConfigStores could
certainly work. rgw can run with only a zone/zonegroup, so such
ConfigStores could return ENOTSUP errors for realm/period operations
to prevent conversions to multisite
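
as a rough sketch (the interface and method names here are only
illustrative, not the actual ConfigStore interface from the PR), such
a single-zone ConfigStore might look like:

  class ConfigStore {
   public:
    virtual ~ConfigStore() = default;
    // zone/zonegroup read/write methods omitted for brevity
    virtual int read_realm(const std::string& name, RGWRealm& realm) = 0;
    virtual int write_period(const RGWPeriod& period) = 0;
  };

  // a file- or mon-based store that only supports zone/zonegroup config
  class FileConfigStore : public ConfigStore {
   public:
    int read_realm(const std::string&, RGWRealm&) override {
      return -ENOTSUP; // no realm support, rgw stays single-site
    }
    int write_period(const RGWPeriod&) override {
      return -ENOTSUP; // refusing period commits blocks conversion to multisite
    }
  };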

the current PR contains a RadosConfigStore and starts on a
DBConfigStore. the DB version will start with sqlite on a local file
(similar to the existing DBStore), so it would be comparable to the 'rgw
realm store = file' you proposed above, except that it could support
multisite configurations. DBConfigStore could also target a database
server like postgresql to allow its config to be shared between
radosgw nodes without any reliance on a ceph cluster

>
> On Wed, Aug 31, 2022 at 1:14 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
> >
> > all rgws in a zone need to serve the same data set, so we've agreed
> > that their store configuration belongs in the RGWZoneParams object
> > that they share. RGWZoneParams can already be parsed/serialized as
> > json, so i'm proposing that we add the store configuration as a nested
> > opaque json object
> >
> > the simplest configuration could look something like this:
> >
> >   "stores": {
> >     "type": "rados"
> >   }
> >
> > a rados backend with a caching filter in front could look like:
> >
> >   "stores": {
> >     "type": "cache",
> >     "size": 512,
> >     "backend": {
> >       "type": "rados"
> >     }
> >   }
> >
> > a more complicated example, where a cloud_filter redirects requests on
> > buckets starting with 'cloud' to a remote s3 service, and satisfies
> > everything else from a cached database backend:
> >
> >   "stores": {
> >     "type": "cloud_filter",
> >     "remote bucket prefix": "cloud",
> >     "remote backend": {
> >       "type": "s3",
> >       "endpoint": "s3.example.com",
> >       "credentials": "password"
> >     },
> >     "local backend": {
> >       "type": "cache",
> >       "backend": {
> >         "type": "dbstore"
> >       }
> >     }
> >   }
> >
> > each store definition requires a "type" so rgw knows which SAL driver
> > to load, but all other fields (including nested backend stores) are
> > only interpreted by that driver
> >
> > so given this json object 'stores', we can write a Store factory that
> > recursively builds this tree of stores and returns the root. for the
> > cloud_filter driver example (with error handling omitted):
> >
> >
> > rgw_sal.cc:
> >
> > unique_ptr<rgw::sal::Store> rgw::sal::load_store(JSONObj* json)
> > {
> >   std::string store_type;
> >   JSONDecoder::decode_json("store", store_type, json);
> >
> >   // load the sal driver and call its sal_create()
> >   auto filename = fmt::format("/path/to/libsal_{}.so", store_type);
> >   void* handle = ::dlopen(filename.c_str(), ...);
> >   auto sal_create = reinterpret_cast<sal_create_fn>(::dlsym(handle, "sal_create"));
> >   return sal_create(json);
> > }
> >
> >
> > libsal_cloud_filter.so:
> >
> > rgw::sal::Store* sal_create(JSONObj* json)
> > {
> >   // load the backends
> >   JSONObj* remote_json = json->find("remote backend");
> >   unique_ptr<rgw::sal::Store> remote_backend =
> >       rgw::sal::load_store(remote_json);
> >
> >   JSONObj* local_json = json->find("local backend");
> >   unique_ptr<rgw::sal::Store> local_backend = rgw::sal::load_store(local_json);
> >
> >   std::string prefix;
> >   JSONDecoder::decode_json("remote bucket prefix", store_type, json);
> >
> >   return new CloudFilterStore(prefix, std::move(remote_backend),
> >       std::move(local_backend));
> > }
> >
>

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


