Hi Robin,

Replies are inline.

Thanks,
Jeegn

2017-12-27 3:00 GMT+08:00 Robin H. Johnson <robbat2@xxxxxxxxxx>:
> On Tue, Dec 26, 2017 at 09:48:26AM +0800, Jeegn Chen wrote:
>> In the daily use of a Ceph RGW cluster, we find some pain points when
>> using the current one-bucket-one-data-pool implementation.
>> I guess one-bucket-multiple-data-pools may help (see the appended
>> detailed proposal).
>> What do you think?
> Overall I like it.
>
> Queries/concerns:
> - How would this interact w/ the bucket policy lifecycle code?

[Jeegn]: My understanding is that the current lifecycle code lists all
objects in a bucket and deletes the out-of-date ones. Only the deletion
logic is affected, and that part is covered by the GC-related change.

> - How would this interact w/ existing placement policy in bucket
>   creation?

[Jeegn]: Multiple-pool support requires data_layout_type in
RGWZonePlacementInfo to have the value SPLITTED (new), while the default
value of data_layout_type is UNIFIED (old). So existing bucket
placements are assumed to have UNIFIED in data_layout_type. To enable
this functionality, the admin needs to create a new placement policy
with data_layout_type explicitly set to SPLITTED. Only buckets created
from a SPLITTED placement policy will follow the new behavior pattern.

> - At the rgw-admin layer, what tooling should exist to migrate objects
>   between pools for a given bucket?

[Jeegn]: I don't expect objects to be migrated between pools. Old
objects uploaded before the tail_pool switch will remain in the original
pool until they are deleted explicitly, which is the same behavior as in
CephFS.

> --
> Robin Hugh Johnson
> Gentoo Linux: Dev, Infra Lead, Foundation Asst. Treasurer
> E-Mail   : robbat2@xxxxxxxxxx
> GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
> GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136