Hi Casey,

we already restarted all RGW instances, but it only helped for about 2 minutes. We have now stopped the new site; I will remove and recreate it later. As the two other sites don't have the problem, I currently think I made a mistake in the process.

Kind regards
Boris Behrens

> On 20.06.2023 at 18:30, Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>
> hi Boris,
>
> we've been investigating reports of excessive polling from metadata
> sync. I just opened https://tracker.ceph.com/issues/61743 to track
> this. Restarting the secondary zone radosgws should help as a
> temporary workaround.
>
>> On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens <bb@xxxxxxxxx> wrote:
>>
>> Hi,
>> yesterday I added a new zonegroup and it looks like it cycles over
>> the same requests over and over again.
>>
>> In the log of the main zone I see these requests:
>> 2023-06-20T09:48:37.979+0000 7f8941fb3700 1 beast: 0x7f8a602f3700:
>> fd00:2380:0:24::136 - - [2023-06-20T09:48:37.979941+0000] "GET
>> /admin/log?type=metadata&id=62&period=e8fc96f1-ae86-4dc1-b432-470b0772fded&max-entries=100&&rgwx-zonegroup=b39392eb-75f8-47f0-b4f3-7d3882930b26
>> HTTP/1.1" 200 44 - - -
>>
>> The only thing that changes is the &id.
>>
>> We have two other zonegroups that are configured identically (ceph.conf and
>> period) and these don't seem to spam the main RGW.
>>
>> root@host:~# radosgw-admin sync status
>>           realm 5d6f2ea4-b84a-459b-bce2-bccac338b3ef (main)
>>       zonegroup b39392eb-75f8-47f0-b4f3-7d3882930b26 (dc3)
>>            zone 96f5eca9-425b-4194-a152-86e310e91ddb (dc3)
>>   metadata sync syncing
>>                 full sync: 0/64 shards
>>                 incremental sync: 64/64 shards
>>                 metadata is caught up with master
>>
>> root@host:~# radosgw-admin period get
>> {
>>     "id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
>>     "epoch": 92,
>>     "predecessor_uuid": "5349ac85-3d6d-4088-993f-7a1d4be3835a",
>>     "sync_status": [
>>         "",
>>         ...
>>         ""
>>     ],
>>     "period_map": {
>>         "id": "e8fc96f1-ae86-4dc1-b432-470b0772fded",
>>         "zonegroups": [
>>             {
>>                 "id": "b39392eb-75f8-47f0-b4f3-7d3882930b26",
>>                 "name": "dc3",
>>                 "api_name": "dc3",
>>                 "is_master": "false",
>>                 "endpoints": [],
>>                 "hostnames": [],
>>                 "hostnames_s3website": [],
>>                 "master_zone": "96f5eca9-425b-4194-a152-86e310e91ddb",
>>                 "zones": [
>>                     {
>>                         "id": "96f5eca9-425b-4194-a152-86e310e91ddb",
>>                         "name": "dc3",
>>                         "endpoints": [],
>>                         "log_meta": "false",
>>                         "log_data": "false",
>>                         "bucket_index_max_shards": 11,
>>                         "read_only": "false",
>>                         "tier_type": "",
>>                         "sync_from_all": "true",
>>                         "sync_from": [],
>>                         "redirect_zone": ""
>>                     }
>>                 ],
>>                 "placement_targets": [
>>                     {
>>                         "name": "default-placement",
>>                         "tags": [],
>>                         "storage_classes": [
>>                             "STANDARD"
>>                         ]
>>                     }
>>                 ],
>>                 "default_placement": "default-placement",
>>                 "realm_id": "5d6f2ea4-b84a-459b-bce2-bccac338b3ef",
>>                 "sync_policy": {
>>                     "groups": []
>>                 }
>>             },
>>             ...
>>
>> --
>> The self-help group "UTF-8 problems" will, as an exception, meet in the big hall this time.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
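
A minimal sketch of how one could quantify the /admin/log metadata polling quoted above and apply Casey's suggested workaround. It assumes the master zone RGW logs to the default location and that the secondary zone RGWs are managed by systemd; the log path, RGW instance name and service names are placeholders, not taken from the thread:

    # On the master zone host: count metadata-log polls per minute in the beast access log.
    # Log path and client name are placeholders and depend on the deployment.
    grep 'GET /admin/log?type=metadata' /var/log/ceph/ceph-client.rgw.<instance>.log \
        | awk '{ print substr($1, 1, 16) }' \
        | sort | uniq -c | tail

    # Casey's temporary workaround: restart the radosgws in the secondary zone.
    # On a package/systemd deployment this restarts all RGW daemons on the host;
    # a cephadm-managed cluster would use "ceph orch restart <rgw service name>" instead.
    systemctl restart ceph-radosgw.target

If the polling really is metadata sync gone into a loop, the per-minute counts should drop sharply right after the restart, which is consistent with Boris seeing relief for only a couple of minutes.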