rgw multisite and metadata abstractions/backends

multisite's metadata sync relies on some abstractions in order to
transfer metadata between zones in json format. it also requires that
any time we write changes to a metadata object, we also write an entry
to the metadata log so other zones know when and what they need to sync.
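
to make that concrete, here's a minimal sketch of the pairing.
everything in it (MetadataLog, write_metadata, the marker scheme) is a
made-up stand-in rather than the real RGWMetadataLog interface, but it
shows the invariant: no metadata write without a matching log entry.

#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct MetadataLogEntry {
  std::string section;  // e.g. "user" or "bucket.instance"
  std::string key;      // key of the metadata object that changed
  std::string marker;   // position used for incremental sync
};

struct MetadataLog {
  std::vector<MetadataLogEntry> entries;
  void add_entry(std::string section, std::string key) {
    entries.push_back({std::move(section), std::move(key),
                       std::to_string(entries.size())});
  }
};

// write the metadata object, then record the change; peer zones replay
// the log and re-fetch each listed key in json form
void write_metadata(MetadataLog& mdlog,
                    const std::string& section, const std::string& key,
                    const std::string& json,
                    const std::function<void(const std::string&)>& store) {
  store(json);                    // 1. persist the metadata object
  mdlog.add_entry(section, key);  // 2. log it so other zones can sync
}

int main() {
  MetadataLog mdlog;
  write_metadata(mdlog, "user", "alice", R"({"user_id":"alice"})",
                 [](const std::string& j) { std::cout << "stored " << j << "\n"; });
  std::cout << "mdlog entries: " << mdlog.entries.size() << "\n";
}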

the main abstraction is RGWMetadataHandler, which we implement for
each kind of metadata. so we have a RGWBucketInstanceMetadataHandler
that knows how to json-encode/decode and read/write bucket instances,
a RGWUserMetadataHandler for users, and so on.
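
roughly, each handler looks something like this. the signatures are
simplified stand-ins, not the actual RGWMetadataHandler API:

#include <map>
#include <optional>
#include <string>

class MetadataHandler {
 public:
  virtual ~MetadataHandler() = default;
  virtual std::string get_type() const = 0;  // section: "user", "bucket.instance", ...
  virtual std::optional<std::string> get(const std::string& key) = 0;  // read + json-encode
  virtual int put(const std::string& key, const std::string& json) = 0;  // json-decode + write
};

class UserMetadataHandler : public MetadataHandler {
  std::map<std::string, std::string> objects;  // stand-in for the rados objects
 public:
  std::string get_type() const override { return "user"; }
  std::optional<std::string> get(const std::string& key) override {
    auto it = objects.find(key);
    if (it == objects.end()) return std::nullopt;
    return it->second;
  }
  int put(const std::string& key, const std::string& json) override {
    objects[key] = json;  // decode/validate elided
    return 0;
  }
};

int main() {
  UserMetadataHandler users;
  users.put("alice", R"({"user_id":"alice"})");
  // sync dispatches on get_type(), then moves entries with get()/put()
  return users.get("alice") ? 0 : 1;
}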

in addition to json encoding and reading/writing, these handlers also
contain some important logic. for example, when
RGWBucketInstanceMetadataHandler writes a new bucket instance it
hasn't seen before, it also creates and initializes that bucket's
index objects.
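
a sketch of that side effect, with invented names and a stand-in shard
layout:

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct BucketIndex {
  std::vector<std::string> shard_objects;
  void init(const std::string& bucket, int num_shards) {
    for (int i = 0; i < num_shards; ++i)  // e.g. ".dir.<bucket>.<shard>"
      shard_objects.push_back(".dir." + bucket + "." + std::to_string(i));
  }
};

class BucketInstanceHandler {
  std::map<std::string, std::string> instances;  // key -> instance json
  std::map<std::string, BucketIndex> indexes;
 public:
  int put(const std::string& key, const std::string& json) {
    const bool is_new = instances.find(key) == instances.end();
    instances[key] = json;
    if (is_new) {  // first write of this instance: set up its index too
      indexes[key].init(key, /*num_shards=*/11);  // shard count is made up
      std::cout << "created " << indexes[key].shard_objects.size()
                << " index shards for " << key << "\n";
    }
    return 0;
  }
};

int main() {
  BucketInstanceHandler h;
  h.put("mybucket:inst1", R"({"bucket":"mybucket"})");  // also inits index
  h.put("mybucket:inst1", R"({"bucket":"mybucket"})");  // plain update
}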

the 'archive zone' uses special handler wrappers like
RGWArchiveBucketMetadataHandler and
RGWArchiveBucketInstanceMetadataHandler to force-enable object
versioning, preserve deleted buckets, and so on.
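
in miniature, the wrapper pattern looks like this (hypothetical types
and signatures; the real archive handlers do more, e.g. archiving
deleted buckets rather than just skipping the delete):

#include <iostream>
#include <memory>
#include <string>
#include <utility>

struct BucketInstanceInfo {
  std::string bucket;
  bool versioned = false;
};

class InstanceHandler {
 public:
  virtual ~InstanceHandler() = default;
  virtual int put(BucketInstanceInfo info) {
    std::cout << "write " << info.bucket << " versioned=" << info.versioned << "\n";
    return 0;
  }
  virtual int remove(const std::string& bucket) {
    std::cout << "delete " << bucket << "\n";
    return 0;
  }
};

class ArchiveInstanceHandler : public InstanceHandler {
  std::unique_ptr<InstanceHandler> inner;  // the handler being wrapped
 public:
  explicit ArchiveInstanceHandler(std::unique_ptr<InstanceHandler> h)
      : inner(std::move(h)) {}
  int put(BucketInstanceInfo info) override {
    info.versioned = true;  // force-enable object versioning
    return inner->put(std::move(info));
  }
  int remove(const std::string&) override {
    return 0;  // keep the bucket around instead of deleting it
  }
};

int main() {
  ArchiveInstanceHandler h(std::make_unique<InstanceHandler>());
  h.put({"mybucket", false});  // stored with versioned=true
  h.remove("mybucket");        // dropped; the archive keeps the bucket
}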


in https://github.com/ceph/ceph/pull/28679, we made a lot of changes
to move the actual rados reads/writes into 'metadata backends'. Yehuda
did a good job documenting the motivations and design decisions in the
PR description, so i'd encourage everyone to read through that.

this predated the zipper work, but shared a similar goal of supporting
non-rados backends. however, this metadata backend layer adds a lot of
complexity without providing any tangible benefit, and it makes adding
new types of metadata significantly harder (see Abhishek's work to
support role metadata in https://github.com/ceph/ceph/pull/37679). i'd
like to see these backends reimagined in terms of zipper to allow
metadata sync between different stores, but i think there are some
open questions here:

* zipper just has a Bucket interface - the distinction between the
'bucket entrypoint' and 'bucket instance' metadata is specific to the
rados store

* the split between MetadataHandler logic and the backends doesn't
seem right, at least in the case of the
RGWBucketInstanceMetadataHandler creating index objects, where the
bucket index itself is a detail of the rados store


i feel like a better approach would be for each zipper store to
implement the MetadataHandlers itself, instead of trying to maintain
the separation between metadata handlers and backends. so each store
would have functions like create_bucket_metadata_handler(),
create_user_metadata_handler(), etc.
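
one possible shape for that, sketched with made-up types (Store here
is just a stand-in for the zipper store interface, not an existing
API):

#include <memory>
#include <string>

class MetadataHandler {  // same shape as the earlier sketch
 public:
  virtual ~MetadataHandler() = default;
  virtual int put(const std::string& key, const std::string& json) = 0;
};

class Store {  // stand-in for the zipper store interface
 public:
  virtual ~Store() = default;
  virtual std::unique_ptr<MetadataHandler> create_bucket_metadata_handler() = 0;
  virtual std::unique_ptr<MetadataHandler> create_user_metadata_handler() = 0;
};

class RadosStore : public Store {
  struct RadosBucketHandler : MetadataHandler {
    int put(const std::string&, const std::string&) override {
      // rados-specific: write the object, init index shards, log to mdlog
      return 0;
    }
  };
  struct RadosUserHandler : MetadataHandler {
    int put(const std::string&, const std::string&) override { return 0; }
  };
 public:
  std::unique_ptr<MetadataHandler> create_bucket_metadata_handler() override {
    return std::make_unique<RadosBucketHandler>();
  }
  std::unique_ptr<MetadataHandler> create_user_metadata_handler() override {
    return std::make_unique<RadosUserHandler>();
  }
};

int main() {
  RadosStore store;
  auto handler = store.create_bucket_metadata_handler();
  return handler->put("mybucket:inst1", "{}");
}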

then for the archive zone, its handlers would wrap the ones it gets
from zipper, and use generic zipper APIs to implement the extra
behavior like enabling versioning.
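
and a sketch of that layering, again with hypothetical types: the
archive handler only needs the generic handler interface, so it works
over whatever store it's wrapping:

#include <iostream>
#include <memory>
#include <string>
#include <utility>

class MetadataHandler {  // the generic interface from the sketch above
 public:
  virtual ~MetadataHandler() = default;
  virtual int put(const std::string& key, const std::string& json) = 0;
};

// whatever handler the store returned; could be rados or anything else
struct SomeStoreHandler : MetadataHandler {
  int put(const std::string& key, const std::string& json) override {
    std::cout << "store writes " << key << ": " << json << "\n";
    return 0;
  }
};

class ArchiveBucketHandler : public MetadataHandler {
  std::unique_ptr<MetadataHandler> inner;  // from create_bucket_metadata_handler()
 public:
  explicit ArchiveBucketHandler(std::unique_ptr<MetadataHandler> h)
      : inner(std::move(h)) {}
  int put(const std::string& key, const std::string& json) override {
    // force versioning on before delegating; the json edit here stands
    // in for a generic zipper call on the bucket info
    std::string versioned = json;
    versioned.insert(versioned.size() - 1, R"(,"versioned":true)");
    return inner->put(key, versioned);
  }
};

int main() {
  ArchiveBucketHandler h(std::make_unique<SomeStoreHandler>());
  h.put("mybucket:inst1", R"({"bucket":"mybucket"})");
}

that way the archive behavior stays store-agnostic, and nothing below
the handler layer has to know about it.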