Hi Cephers,

I set the lifecycle via Cyberduck. I also get an error first, but then Cyberduck suddenly refreshes the window and the lifecycle is there. This is what I see when I check it via s3cmd (the GitHub master version, because the regularly installed version doesn't offer the "getlifecycle" option):

[root s3cmd-master]# ./s3cmd getlifecycle s3://Test/README.txt
<?xml version="1.0" ?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Rule>
        <ID>Cyberduck-nVWEhQwE</ID>
        <Prefix/>
        <Status>Enabled</Status>
        <Expiration>
            <Days>1</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>

Here is my S3 user info:

[root ~]# radosgw-admin user info --uid=666
{
    "user_id": "666",
    "display_name": "First User",
    "email": "a.b@xxxx",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "666",
            "access_key": "abc ;)",
            "secret_key": "abc def ;)"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

If someone has a working example of how to set a lifecycle via s3cmd, I can try it and report the outcome...

Sent: Monday, 03 April 2017, 01:43
From: "Ben Hines" <bhines@xxxxxxxxx>
To: "Orit Wasserman" <owasserm@xxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

Hmm, nope, not using the tenants feature. The users/buckets were created on prior Ceph versions; perhaps I'll try with a newly created user + bucket.
radosgw-admin user info --uid=foo
{
    "user_id": "foo",
    "display_name": "foo",
    "email": "snip",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "foo:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "foo:swift",
            "access_key": "xxx",
            "secret_key": ""
        },
        {
            "user": "foo",
            "access_key": "xxx",
            "secret_key": "xxxx"
        }
    ],
    "swift_keys": [],
    "caps": [
        { "type": "buckets", "perm": "*" },
        { "type": "metadata", "perm": "*" },
        { "type": "usage", "perm": "*" },
        { "type": "users", "perm": "*" },
        { "type": "zone", "perm": "*" }
    ],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1024,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1024,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "none"
}

On Sun, Apr 2, 2017 at 5:54 AM, Orit Wasserman <owasserm@xxxxxxxxxx> wrote:

I see: acct_user=foo, acct_name=foo. Are you using radosgw with tenants? If not, that could be the problem.

Orit

On Sat, Apr 1, 2017 at 7:43 AM, Ben Hines <bhines@xxxxxxxxx> wrote:

I'm also trying to use lifecycles (via boto3), but I'm getting permission denied when trying to create the lifecycle. I'm the bucket owner with FULL_CONTROL, and WRITE_ACP for good measure. Any ideas?
This is with debug ms=20 and debug radosgw=20:

2017-03-31 21:28:18.382217 7f50d0010700  2 req 8:0.000693:s3:PUT /bentest:put_lifecycle:verifying op permissions
2017-03-31 21:28:18.382222 7f50d0010700  5 Searching permissions for identity=RGWThirdPartyAccountAuthApplier() -> RGWLocalAuthApplier(acct_user=foo, acct_name=foo, subuser=, perm_mask=15, is_admin=) mask=56
2017-03-31 21:28:18.382232 7f50d0010700  5 Searching permissions for uid=foo
2017-03-31 21:28:18.382235 7f50d0010700  5 Found permission: 15
2017-03-31 21:28:18.382237 7f50d0010700  5 Searching permissions for group=1 mask=56
2017-03-31 21:28:18.382297 7f50d0010700  5 Found permission: 3
2017-03-31 21:28:18.382307 7f50d0010700  5 Searching permissions for group=2 mask=56
2017-03-31 21:28:18.382313 7f50d0010700  5 Permissions for group not found
2017-03-31 21:28:18.382318 7f50d0010700  5 Getting permissions identity=RGWThirdPartyAccountAuthApplier() -> RGWLocalAuthApplier(acct_user=foo, acct_name=foo, subuser=, perm_mask=15, is_admin=) owner=foo perm=8
2017-03-31 21:28:18.382325 7f50d0010700 10 identity=RGWThirdPartyAccountAuthApplier() -> RGWLocalAuthApplier(acct_user=foo, acct_name=foo, subuser=, perm_mask=15, is_admin=) requested perm (type)=8, policy perm=8, user_perm_mask=8, acl perm=8
2017-03-31 21:28:18.382330 7f50d0010700  2 req 8:0.000808:s3:PUT /bentest:put_lifecycle:verifying op params
2017-03-31 21:28:18.382334 7f50d0010700  2 req 8:0.000813:s3:PUT /bentest:put_lifecycle:pre-executing
2017-03-31 21:28:18.382339 7f50d0010700  2 req 8:0.000817:s3:PUT /bentest:put_lifecycle:executing
2017-03-31 21:28:18.382361 7f50d0010700 15 read len=183 data=<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Rule><Status>Enabled</Status><Expiration><Days>1</Days></Expiration><ID>0</ID></Rule></LifecycleConfiguration>
2017-03-31 21:28:18.382439 7f50d0010700  2 req 8:0.000917:s3:PUT /bentest:put_lifecycle:completing
2017-03-31 21:28:18.382594 7f50d0010700  2 req 8:0.001072:s3:PUT /bentest:put_lifecycle:op status=-13
2017-03-31 21:28:18.382620 7f50d0010700  2 req 8:0.001098:s3:PUT /bentest:put_lifecycle:http status=403
2017-03-31 21:28:18.382665 7f50d0010700  1 ====== req done req=0x7f50d000a340 op status=-13 http_status=403 ======

-Ben

On Tue, Mar 28, 2017 at 6:42 AM, Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:

On 03/27/2017 04:28 PM, ceph.novice@xxxxxxxxxxxxxxxx wrote:

Hi Cephers. I couldn't find any special documentation about "S3 object expiration", so I assume it should work "AWS S3 like" (?!?)... BUT... we have a test cluster based on 11.2.0 - Kraken, and I set some object expiration dates via CyberDuck and DragonDisk, but the objects are still there, days after the applied date/time. Am I missing something? Thanks & regards

It is intended to work like AWS S3, yes. Not every feature of AWS lifecycle is supported (for example, no moving between storage tiers), but deletion works and is tested in teuthology runs.

Did you somehow turn it off? The config option rgw_enable_lc_threads controls it, but it defaults to "on". Also make sure rgw_lc_debug_interval is not set, and that rgw_lifecycle_work_time isn't set to some interval too small to scan your objects...
Daniel

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
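The three options Daniel mentions live in ceph.conf on the gateway host. A sketch of how they could be pinned explicitly, assuming a hypothetical gateway section name (the values shown are assumptions, not taken from the thread):

```ini
; "client.rgw.gateway1" is a hypothetical section name; match your instance.
[client.rgw.gateway1]
; Lifecycle processing defaults to on; set it explicitly if in doubt.
rgw enable lc threads = true
; Leave the debug interval unset in production (shown commented out).
; rgw lc debug interval = 10
; Daily window in which lifecycle runs (HH:MM-HH:MM); make it long
; enough for the scan to cover all objects.
rgw lifecycle work time = 00:00-06:00
```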
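For the working s3cmd example requested earlier in the thread, a minimal sketch could look like this. It is untested against this cluster; the bucket name "Test", the rule ID, and the policy file name are illustrative, and "setlifecycle" requires a recent/master s3cmd just like "getlifecycle" does:

```shell
# Write an expiration rule matching the one Cyberduck produced above.
cat > lifecycle.xml <<'EOF'
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>expire-1-day</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</EOF_PLACEHOLDER>
EOF

# Apply and read back. These need a configured ~/.s3cfg pointing at the
# RGW endpoint, so they are shown commented out here:
#   ./s3cmd setlifecycle lifecycle.xml s3://Test
#   ./s3cmd getlifecycle s3://Test

# Sanity-check the policy file locally before pushing it.
grep -c '<Days>1</Days>' lifecycle.xml
```

Note the prefix element is left empty so the rule applies to the whole bucket, the same shape the Cyberduck-generated rule shows at the top of the thread.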