I still haven't seen anything get expired on our Kraken (11.2.0) system.
When I run "radosgw-admin lc list" I get no output other than debug
messages (I have "debug rgw = 10" set at present):
# radosgw-admin lc list
2017-06-06 10:57:49.319576 7f2b26ffd700  2 RGWDataChangesLog::ChangesRenewThread: start
2017-06-06 10:57:49.350646 7f2b49558c80 10 Cannot find current period zone using local zone
2017-06-06 10:57:49.379065 7f2b49558c80  2 all 8 watchers are set, enabling cache
[]
2017-06-06 10:57:49.399538 7f2b49558c80  2 removed watcher, disabling cache
It's unclear to me whether the debug message "Cannot find current
period zone using local zone" is related or indicates a problem.
Currently all the lifecycle config is more or less at the defaults, e.g. a few values:
# ceph --show-config|grep rgw_|grep lifecycle
rgw_lifecycle_enabled = true
rgw_lifecycle_thread = 1
rgw_lifecycle_work_time = 00:00-06:00
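I haven't yet double-checked what the running radosgw actually has
loaded; "ceph --show-config" only reflects the defaults plus the local
ceph.conf. Presumably the admin socket would confirm it, along the lines
of the following, where the daemon name is just a placeholder for
whatever the RGW instance is actually called here:
# ceph daemon client.rgw.gateway1 config show | grep lifecycle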
Graham
On 06/05/2017 01:07 PM, Ben Hines wrote:
FWIW lifecycle is working for us. I did have to do some research to find
the appropriate lc config file settings; the documentation for them is
currently only in a git pull request (waiting for another release?)
rather than on the Ceph docs site: https://github.com/ceph/ceph/pull/13990
Try these:
debug rgw = 20
rgw lifecycle work time = 00:01-23:59
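These go in the RGW section of ceph.conf and, as far as I know, need an
RGW restart to take effect. A minimal sketch, where the instance name
client.rgw.gateway1 is just a placeholder for whatever yours is called:
[client.rgw.gateway1]
    # verbose RGW logging while debugging lifecycle processing
    debug rgw = 20
    # widen the window in which the lc thread is allowed to run
    # (the default is 00:00-06:00)
    rgw lifecycle work time = 00:01-23:59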
and see if you have lifecycles listed when you run:
radosgw-admin lc list
2017-06-05 10:58:00.473957 7f3429f77c80 0 System already converted
[
    {
        "bucket": ":bentest:default.653959.6",
        "status": "COMPLETE"
    },
    {
        "bucket": ":<redacted>:default.24713983.1",
        "status": "PROCESSING"
    },
    {
        "bucket": ":<redacted>:default.24713983.2",
        "status": "PROCESSING"
    },
    ....
At log level 10, the lifecycle processor logs 'DELETED' each time it
deletes something:
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_lc.cc#L388
grep --text DELETED client.<hostname>.log | wc -l
121853
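To watch deletions happening live rather than counting them after the
fact, something like this should work too (same log file name as above):
tail -f client.<hostname>.log | grep --line-buffered --text DELETED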
-Ben
On Mon, Jun 5, 2017 at 6:16 AM, Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:
Kraken has lifecycle, Jewel does not.
Daniel
On 06/04/2017 07:16 PM, ceph.novice@xxxxxxxxxxxxxxxx wrote:
grrr... sorry, and here it is again as text :|
Sent: Monday, 5 June 2017 at 01:12
From: ceph.novice@xxxxxxxxxxxxxxxx
To: "Yehuda Sadeh-Weinraub" <yehuda@xxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>, ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: RGW lifecycle not expiring objects
Hi (again) Yehuda.
Looping in ceph-devel...
Could it be that lifecycle is still not implemented in either
Jewel or Kraken, even though the release notes and other places say it is?
https://www.spinics.net/lists/ceph-devel/msg34492.html
https://github.com/ceph/ceph-ci/commit/7d48f62f5c86913d8f00b44d46a04a52d338907c
https://github.com/ceph/ceph-ci/commit/9162bd29594d34429a09562ed60a32a0703940ea
Thanks & regards
Anton
Sent: Sunday, 4 June 2017 at 21:34
From: ceph.novice@xxxxxxxxxxxxxxxx
To: "Yehuda Sadeh-Weinraub" <yehuda@xxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: RGW lifecycle not expiring objects
Hi Yehuda.
Well, here we go:
http://tracker.ceph.com/issues/20177
As it's my first one, I hope it's OK as it is...
Thanks & regards
Anton
Sent: Saturday, 3 June 2017 at 00:14
From: "Yehuda Sadeh-Weinraub" <yehuda@xxxxxxxxxx>
To: ceph.novice@xxxxxxxxxxxxxxxx
Cc: "Graham Allan" <gta@xxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: RGW lifecycle not expiring objects
Have you opened a ceph tracker issue, so that we don't lose track of
the problem?
Thanks,
Yehuda
On Fri, Jun 2, 2017 at 3:05 PM, <ceph.novice@xxxxxxxxxxxxxxxx> wrote:
Hi Graham.
We are on Kraken and have the same problem with "lifecycle".
Various other tools like s3cmd or CyberDuck do show the
applied "expiration" settings, but objects never seem to be
purged.
If you come across any new findings or hints, please
share / let me know.
Thanks a lot!
Anton
Sent: Friday, 19 May 2017 at 22:44
From: "Graham Allan" <gta@xxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Subject: RGW lifecycle not expiring objects
I've been having a hard time getting the S3 object lifecycle
to do anything here. I was able to set a lifecycle on a test
bucket. As others also seem to have found, I do get an EACCES
error on setting the lifecycle, but the configuration does get stored:
% aws --endpoint-url https://xxx.xxx.xxx.xxx s3api
get-bucket-lifecycle-configuration --bucket=testgta
{
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": {
                "Days": 3
            },
            "ID": "test"
        }
    ]
}
but many days later I have yet to see any object actually
get expired.
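In case the way the rule was applied matters: it was set with
something roughly like the following (reconstructed from memory,
so treat the details as approximate; the endpoint and file name
are placeholders, and lifecycle.json just contains the "Rules"
document shown above):
% aws --endpoint-url https://xxx.xxx.xxx.xxx s3api
put-bucket-lifecycle-configuration --bucket=testgta
--lifecycle-configuration file://lifecycle.json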
There are some hints in the rgw log that the expiry thread
does run
periodically:
2017-05-19 03:49:03.281347 7f74f1134700  2 RGWDataChangesLog::ChangesRenewThread: start
2017-05-19 03:49:16.356022 7f74ef931700  2 object expiration: start
2017-05-19 03:49:16.356036 7f74ef931700 20 proceeding shard = obj_delete_at_hint.0000000000
2017-05-19 03:49:16.359785 7f74ef931700 20 proceeding shard = obj_delete_at_hint.0000000001
2017-05-19 03:49:16.364667 7f74ef931700 20 proceeding shard = obj_delete_at_hint.0000000002
2017-05-19 03:49:16.369636 7f74ef931700 20 proceeding shard = obj_delete_at_hint.0000000003
...
2017-05-19 03:49:16.803270 7f74ef931700 20 proceeding shard = obj_delete_at_hint.0000000126
2017-05-19 03:49:16.806423 7f74ef931700  2 object expiration: stop
"radosgw-admin lc process" gives me no output unless I
enable debug, then:
# radosgw-admin lc process
2017-05-19 15:28:46.383049 7fedb9ffb700  2 RGWDataChangesLog::ChangesRenewThread: start
2017-05-19 15:28:46.421806 7feddc240c80 10 Cannot find current period zone using local zone
2017-05-19 15:28:46.453431 7feddc240c80  2 all 8 watchers are set, enabling cache
2017-05-19 15:28:46.614991 7feddc240c80  2 removed watcher, disabling cache
"radosgw-admin lc list" seems to return "empty" output:
# radosgw-admin lc list
[]
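One thing that might be worth checking, though I'm not sure of the
exact pool (and possibly namespace) that holds them on Kraken, is
whether the lc.N shard objects exist at all; something along these
lines, with the pool name here being a guess based on the default
zone layout:
# rados -p default.rgw.log ls --all | grep 'lc\.'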
Is there anything obvious that I might be missing?
Graham
--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx
--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com