Hello Users,

We're running two Ceph clusters on v17.2.6 and are seeing the following error in `# radosgw-admin sync error list`:

    "message": "failed to sync bucket instance: (125) Operation canceled"

The output looks like this:

[
    {
        "shard_id": 0,
        "entries": [
            {
                "id": "1_1690711173.869335_133603.1",
                "section": "data",
                "name": "b1:d09d3d16-8601-448b-bf3d-609b8a29647d.38987.1:2897",
                "timestamp": "2023-07-30T09:59:33.869335Z",
                "info": {
                    "source_zone": "d09d3d16-8601-448b-bf3d-609b8a29647d",
                    "error_code": 125,
                    "message": "failed to sync bucket instance: (125) Operation canceled"
                }
            },
            {
                "id": "1_1690711175.505687_133683.1",
                "section": "data",
                "name": "b1:d09d3d16-8601-448b-bf3d-609b8a29647d.38987.1:1719",
                "timestamp": "2023-07-30T09:59:35.505687Z",
                "info": {
                    "source_zone": "d09d3d16-8601-448b-bf3d-609b8a29647d",
                    "error_code": 125,
                    "message": "failed to sync bucket instance: (125) Operation canceled"
                }
            },

There are around 26236 such errors:

# radosgw-admin sync error list | grep -i "(125) Operation canceled" | wc -l
26236

I'm trying to fix these by rewriting the affected objects, but I'm having trouble finding the exact object names and the procedure. Any help is really appreciated!

Thanks,
Jayanth
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
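One observation that may help narrow this down: the `name` field in each entry appears to encode the bucket name, bucket-instance ID, and shard (e.g. `b1:<instance-id>:2897`) rather than individual object names. As a starting point, here is a minimal sketch (an assumption on my part, not an official tool) that parses the JSON from `radosgw-admin sync error list` and tallies errors per bucket/instance/shard so you can see which buckets are affected:

```python
import json
from collections import Counter

def affected_bucket_shards(error_list_json):
    """Parse `radosgw-admin sync error list` JSON output and count
    errors per (bucket, bucket_instance, shard), assuming the 'name'
    field has the form bucket:instance_id:shard."""
    counts = Counter()
    for shard in json.loads(error_list_json):
        for entry in shard.get("entries", []):
            parts = entry["name"].split(":")
            if len(parts) == 3:
                counts[tuple(parts)] += 1
    return counts

# Sample trimmed from the error list output above.
sample = """
[
  {
    "shard_id": 0,
    "entries": [
      {"id": "1_1690711173.869335_133603.1",
       "section": "data",
       "name": "b1:d09d3d16-8601-448b-bf3d-609b8a29647d.38987.1:2897",
       "timestamp": "2023-07-30T09:59:33.869335Z",
       "info": {"source_zone": "d09d3d16-8601-448b-bf3d-609b8a29647d",
                "error_code": 125,
                "message": "failed to sync bucket instance: (125) Operation canceled"}}
    ]
  }
]
"""

for (bucket, instance, shard), n in affected_bucket_shards(sample).items():
    print(f"bucket={bucket} instance={instance} shard={shard} errors={n}")
# -> bucket=b1 instance=d09d3d16-8601-448b-bf3d-609b8a29647d.38987.1 shard=2897 errors=1
```

Once you know the affected buckets, one thing you could try (a suggestion, not a guaranteed fix) is re-running sync for a bucket with `radosgw-admin bucket sync run --bucket=<bucket>` on the pulling zone, or rewriting individual objects with `radosgw-admin object rewrite --bucket=<bucket> --object=<object>` once you've identified them via `radosgw-admin bucket list --bucket=<bucket>`.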