Re: RGW lifecycle not expiring objects

That seems to be it! I couldn't see a way to specify the auth version with the aws CLI (is there a way?). However, it did work with s3cmd and v2 auth:
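(As an aside, the AWS CLI apparently can be pinned to v2 signing through its config file rather than a command-line flag — this is my understanding of the awscli config, untested here:)

```shell
# Hedged aside: force SigV2 for S3 operations via the AWS CLI config.
# This writes "signature_version = s3" into the [default] profile's
# s3 section in ~/.aws/config.
aws configure set default.s3.signature_version s3
```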

% s3cmd --signature-v2 setlifecycle lifecycle.xml s3://testgta
s3://testgta/: Lifecycle Policy updated
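(For reference, the lifecycle.xml wasn't posted; a reconstruction matching the rule shown below would look roughly like:)

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>test</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```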

(I believe that with Kraken this threw an error and failed to set the policy, but I'm not certain at this point. Besides which, radosgw didn't then have access to the default.rgw.lc pool, which may have caused further issues.)

There's no way to read the lifecycle policy back with s3cmd, so:

% aws --endpoint-url https://xxx.xxx.xxx.xxx s3api \
    get-bucket-lifecycle-configuration --bucket=testgta
{
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": {
                "Days": 1
            },
            "ID": "test"
        }
    ]
}

and things look encouraging on the server side:

#  radosgw-admin lc list
[
    {
        "bucket": ":gta:default.6985397.1",
        "status": "UNINITIAL"
    },
    {
        "bucket": ":testgta:default.6790451.1",
        "status": "UNINITIAL"
    }
]

then:
#  radosgw-admin lc process

and all the (very old) objects disappeared from the test bucket.
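(Worth noting: "lc process" forces an immediate run; normally RGW processes lifecycle on its own daily schedule. For testing expiry without waiting a day, something like the following may help — the option names here are my assumption for Luminous-era RGW, so verify against your release's docs:)

```ini
# ceph.conf sketch - assumed option names, check your RGW version
[client.rgw]
rgw lifecycle work time = 00:00-24:00  ; widen the daily processing window
rgw lc debug interval = 10             ; testing only: scales lifecycle days down to seconds
```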

Thanks!

Graham


On 06/28/2017 09:47 AM, Daniel Gryniewicz wrote:
This is almost certainly because it's using v4 auth, which is not yet well supported in RGW. Can you try with v2 auth?

Daniel

On 06/27/2017 07:57 PM, Graham Allan wrote:
I upgraded my test cluster to Luminous 12.1.0, and a separate problem
made me realize a possible cause for the lifecycle failure.

After the upgrade, radosgw spammed its logfile with "failed to list
reshard log entries" errors. I realized that the radosgw user only had
authorization to access specific osd pools - and not the newly-created
"default.rgw.reshard" pool.

This also made me realize it had no access to the new-for-Kraken
"default.rgw.lc" pool. That may explain the failure to process lifecycle
below...

The behavior has now changed. Listing lifecycle on my test bucket gives
different output:

% aws --endpoint-url https://xxx.xxx.xxx.xxx s3api \
    get-bucket-lifecycle-configuration --bucket=testgta
{
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "test"
        }
    ]
}

Editing this lifecycle to add back the now-missing "Days" parameter
results in a NotImplemented error:

% aws --endpoint-url https://xxx.xxx.xxx.xxx s3api \
    put-bucket-lifecycle-configuration --bucket=testgta \
    --lifecycle-configuration  file://lifecycle.json

An error occurred (NotImplemented) when calling the
PutBucketLifecycleConfiguration operation: Unknown

and the lifecycle is not updated.
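(For reference, the lifecycle.json being PUT wasn't posted; it was essentially the rule above with "Days" restored in place of the delete-marker flag, i.e. something like:)

```json
{
    "Rules": [
        {
            "ID": "test",
            "Prefix": "",
            "Status": "Enabled",
            "Expiration": {
                "Days": 1
            }
        }
    ]
}
```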

So, I think I understand why it may not have worked before, but the
goalposts seem to have moved to a new problem.

Would appreciate any ideas...

Graham

--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


