$ git --version
git version 2.21.0

When fetching from or pushing to a forked repo on GitHub, I've noticed
several times that many more objects were being fetched or pushed than
were strictly necessary. I'm not sure if it's a bug or just an
opportunity for performance improvement. I got these traces:

$ git fetch --all
Fetching origin
remote: Enumerating objects: 29507, done.
remote: Counting objects: 100% (29507/29507), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 53914 (delta 29478), reused 29500 (delta 29471), pack-reused 24407
Receiving objects: 100% (53914/53914), 31.90 MiB | 111.00 KiB/s, done.
Resolving deltas: 100% (42462/42462), completed with 7382 local objects.

--

$ git push -v origin 'refs/replace/*:refs/replace/*'
Pushing to XXXX
Enumerating objects: 2681, done.
Counting objects: 100% (2681/2681), done.
Delta compression using up to 8 threads
Compressing objects: 100% (1965/1965), done.
Writing objects: 100% (2582/2582), 1.96 MiB | 1024 bytes/s, done.
Total 2582 (delta 95), reused 1446 (delta 58)
remote: Resolving deltas: 100% (95/95), completed with 33 local objects.
To XXXX
 * [new branch]      refs/replace/XXXX -> refs/replace/XXXX

--

In particular, pushing a single replace commit required 2582 objects to
be written, and this was right after a fetch had been done. This hurts
on flaky and slow connections: the more objects that need to be written
or read, the higher the chance of the connection failing. Combined with
the inability to resume fetches/pushes without downloading or uploading
ALL objects again, this can become quite a frustrating experience.

Any thoughts?

Regards,
Paul
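
P.S. As a rough cross-check (only a sketch: I'm assuming the remote is
'origin' and that its remote-tracking refs are current after the fetch
above), counting the objects reachable from the replace ref but not
from any of origin's remote-tracking refs gives an idea of how many
objects should really need to be sent:

# objects reachable from the replace ref, minus everything already
# reachable from refs/remotes/origin/* ('origin' is an assumption)
$ git rev-list --objects refs/replace/XXXX --not --remotes=origin | wc -l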