Hi all,

This is a self-inflicted issue, but I am wondering if there is a way to recover.

Setup:
- RGW multisite with 1 realm, 1 zonegroup and 2 zones.
- Metadata and data replication enabled (data replication is bi-directional).
- Only the master side is currently used by clients.
- Ceph 12.2.12

My mistake was to delete all the logs in the rgw.log pool on the master side and some on the secondary side. Data replication is, surprisingly, still working fine, but metadata replication is not.

When I run "metadata sync init" on the secondary side I see:

-----
# radosgw-admin metadata sync init
ERROR: sync.init_sync_status() returned ret=-5
2020-01-02 16:10:30.215774 7f26e9544dc0 0 meta sync: ERROR: failed to fetch mdlog info
2020-01-02 16:10:30.215797 7f26e9544dc0 -1 meta sync: ERROR: fail to fetch master log info (r=-5)
-----

With --debug-rgw=20 and --debug-ms=1 I see:

-----
received header:HTTP/1.1 500 Internal Server Error
-----

On the master side I can run the following commands successfully:

# radosgw-admin mdlog status
# radosgw-admin mdlog list

From the debug session on the master side I see:

-----
cache get: name=default.rgw.log++meta.history : hit (requested=0x1, cached=0x17)
failed to decode the mdlog history: buffer::end_of_buffer
failed to read mdlog history: (5) Input/output error
-----

The buffer::end_of_buffer makes me think the meta.history object itself is truncated or damaged, which would explain the 500 the master returns when the secondary asks for the mdlog info.

I can still view the meta.history object with "log show", though the values look like garbage (presumably because "log show" expects ops-log records rather than the mdlog history format):

-----
# radosgw-admin log show --object=meta.history
{
    "bucket_id": "",
    "bucket_owner": "",
    "bucket": "",
    "log_entries": [
        {
            "bucket": "",
            "time": "0.000000",
            "time_local": "0.000000",
            "remote_addr": "",
            "user": "",
            "operation": "",
            "uri": "",
            "http_status": "",
            "error_code": "",
            "bytes_sent": 7738151095376705906,
            "bytes_received": 4856413621961454149,
            "object_size": 7595168404649370735,
            "total_time": 0,
            "user_agent": "",
            "referrer": ""
        }
    ],
    "log_sum": {
        "bytes_sent": 7738151095376705906,
        "bytes_received": 4856413621961454149,
        "total_time": 0,
        "total_entries": 1
    }
}
-----

Any ideas on how to further troubleshoot the issue?

thx
Frank
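P.S. For completeness, this is a minimal sketch of how I have been inspecting the raw object on the master side, assuming the log pool is default.rgw.log as shown in the debug output above (adjust the pool name if yours differs):

-----
# check the size and mtime of the raw RADOS object; a truncated payload
# would match the buffer::end_of_buffer decode failure
rados -p default.rgw.log stat meta.history

# dump the raw bytes and eyeball the encoded payload
rados -p default.rgw.log get meta.history /tmp/meta.history.bin
hexdump -C /tmp/meta.history.bin | head
-----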