Hey ceph-users,
I am running two clusters (now on Quincy) doing RGW multi-site replication,
with only one of them actually being written to by clients.
The other site is intended simply as a remote copy.
On the primary cluster I am observing an ever-growing (in both objects and
bytes) "sitea.rgw.log" pool. Not so on the remote: "siteb.rgw.log" is only
about 300 MiB and around 15k objects, with no growth.
Metrics show that the growth of the pool on the primary has been linear for
at least 6 months, so no sudden spikes or anything. Also, sync status appears
to be completely happy, and there are no warnings regarding large OMAPs or
anything similar.
I was under the impression that RGW trims its three logs (md, bi, data)
automatically and keeps only data that has not yet been replicated by the
other zonegroup members?
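For context, this is how I have been checking the replication and per-log
state so far (exact output fields vary by release):

```shell
# Overall replication state as seen from the primary zone
radosgw-admin sync status

# Current markers/positions of the metadata and data logs
radosgw-admin mdlog status
radosgw-admin datalog status
```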
The config option "ceph config get mgr rgw_sync_log_trim_interval" is
set to 1200, i.e. 20 minutes.
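As far as I understand, this option is consumed by the RGW daemons
themselves rather than the mgr, so perhaps it is worth confirming what the
radosgw processes actually see (the daemon name below is just an example):

```shell
# Value the RGW daemons would pick up (client.rgw section)
ceph config get client.rgw rgw_sync_log_trim_interval

# Or ask a specific running radosgw via its admin socket
# (daemon name is hypothetical, adjust to your deployment)
ceph daemon client.rgw.sitea-host1 config get rgw_sync_log_trim_interval
```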
So I am wondering whether there might be some inconsistency, and how I can
best analyze what is causing this accumulation of log data.
There are older questions on the ML, such as [1], but no real solution or
root cause was ever identified.
I know there is manual trimming, but I would rather analyze the current
situation first and figure out why auto-trimming is not happening.
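(For reference, the manual trim commands I would be falling back to look
roughly like the following; flags vary by release, so check
"radosgw-admin --help" before use, and the markers/IDs are placeholders:)

```shell
# Trim a data log shard up to a given marker (placeholder values)
radosgw-admin datalog trim --shard-id=0 --end-marker=<marker>

# Trim a metadata log shard for a given period (placeholder values)
radosgw-admin mdlog trim --period=<period-id> --shard-id=0 --end-marker=<marker>

# Trim the bucket index log of one bucket
radosgw-admin bilog trim --bucket=<bucket>
```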
* Do I need to go through all buckets, count their logs, and look at
their timestamps? Which queries make sense here?
* Is there usually any logging of log-trimming activity that I should
expect, or that might indicate why trimming does not happen?
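Regarding the first bullet, my current idea, instead of walking all
buckets, is to group the objects in the log pool by name prefix to see
which log type is accumulating, and then sample the oldest entries. The
prefixes (data_log., meta.log., ...) and log paths below are what I would
expect, not verified facts:

```shell
# Count objects in the log pool by leading name component
rados -p sitea.rgw.log ls | awk -F. '{print $1}' | sort | uniq -c | sort -rn

# For a suspicious object, the log entries live in its omap keys
rados -p sitea.rgw.log listomapkeys data_log.0 | wc -l

# Sample entries (and their timestamps) from one data log shard
radosgw-admin datalog list --shard-id=0 --max-entries=10
```

Regarding the second bullet, I assume raising the RGW debug level
temporarily and grepping for trim activity would be the way to go:

```shell
ceph config set client.rgw debug_rgw 10
# then on the RGW host (log path depends on the deployment):
grep -i trim /var/log/ceph/ceph-client.rgw.*.log
```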
Regards
Christian
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/WZCFOAMLWV3XCGJ3TVLHGMJFVYNZNKLD/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx