Hi,

Lately I've noticed that git occasionally does very large re-fetches, even though the difference between my local state and the remote is not large. For example, here are two aborted fetches (note how small the enumerating / counting / compressing numbers are compared to the number of objects being received):

$ git fetch torvalds
remote: Enumerating objects: 19374, done.
remote: Counting objects: 100% (19374/19374), done.
remote: Compressing objects: 100% (4016/4016), done.
^Cceiving objects:   2% (161673/7478285), 80.54 MiB | 2.78 MiB/s

$ git fetch sound
remote: Enumerating objects: 52009, done.
remote: Counting objects: 100% (52009/52009), done.
remote: Compressing objects: 100% (5480/5480), done.
^Cceiving objects:   1% (74819/7481898), 37.92 MiB | 1.98 MiB/s

I don't see any real pattern; the last few fetches from either of these two remotes (a few days ago) were also quite small.

$ git --version
git version 2.26.2

The relevant stanzas from my .git/config:

[remote "torvalds"]
	url = git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
	fetch = +refs/heads/*:refs/remotes/torvalds/*
[remote "sound"]
	url = git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git
	fetch = +refs/heads/*:refs/remotes/sound/*

A second, more general issue: I used to have just a few small files under .git/objects/XX, with a few large packs under .git/objects/pack. But at the moment .git/objects/pack is 4.4 GB while .git/objects as a whole is 5.2 GB, i.e. roughly 800 MB of loose objects.

Also, is there a "best practice" for tracking multiple upstreams (and the convenience that brings) while keeping separate the local repo / heads (the part that "I do", that needs backing up, etc.)? I thought of splitting the two: a "mine" repo whose info/alternates points at a pure-fetch reference repo for upstream. But I am a bit worried about doing a fetch in "mine" before (or without) updating the pure-fetch reference repo first, in which case duplicate objects would end up in "mine"; I would also be vulnerable to an upstream rebase, for example.

Any thoughts?
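
For the loose-object question, a minimal sketch of how to measure the loose vs. packed split and repack it. It builds a throwaway repo so it is safe to run anywhere; in practice you would run only the `git count-objects -v` / `git gc` pair inside the real clone (the temp-repo scaffolding here is purely illustrative):

```shell
# Throwaway repo purely for illustration; in a real clone, run only
# the count-objects and gc commands below.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo hello > file
git add file
git -c user.email=you@example.com -c user.name=you commit -qm init
git count-objects -v           # "count:" / "size:" = loose objects under .git/objects/XX
git gc --quiet                 # repacks loose objects into .git/objects/pack, prunes leftovers
loose_after=$(git count-objects -v | awk '/^count:/ {print $2}')
echo "loose objects after gc: $loose_after"
```

On a repo where everything loose is reachable, the loose count drops to zero after gc, since the loose copies become redundant once packed.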
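
For the split-repo idea, the mechanics look roughly like this; the repo names ("upstream", "mine") and the throwaway local paths are made up for illustration, standing in for the kernel.org remotes. `git clone --reference` writes the .git/objects/info/alternates file for you:

```shell
# Hypothetical layout: "upstream" plays the pure-fetch reference repo,
# "mine" is the working repo that borrows objects from it.
base=$(mktemp -d)
git init -q "$base/upstream"
(
  cd "$base/upstream"
  echo v1 > f
  git add f
  git -c user.email=you@example.com -c user.name=you commit -qm one
)
# --reference records the other repo's object store in
# .git/objects/info/alternates, so objects already present there
# are borrowed rather than copied or re-fetched.
git clone -q --reference "$base/upstream" "$base/upstream" "$base/mine"
alt=$(cat "$base/mine/.git/objects/info/alternates")
echo "alternates -> $alt"
```

One known caveat with this layout, which matches the rebase worry above: if the reference repo ever prunes objects (e.g. after upstream rebases and the old objects become unreachable there), a dependent repo whose history still needs those objects is left broken, so the reference repo should be treated as fetch-only and never gc'd aggressively.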