What did you do before the bug happened? (Steps to reproduce your issue)

    git clone https://github.com/notracking/hosts-blocklists
    cd hosts-blocklists
    git reflog expire --all --expire=now && git gc --prune=now --aggressive

What did you expect to happen? (Expected behavior)

Running gc on a ~300 MB repo should not take 1 hour 55 minutes when running gc on a 2.6 GB repo (LLVM) takes only 24 minutes.

What happened instead? (Actual behavior)

The command took 1h 55m to complete on the ~300 MB repo and used enough resources that the machine was almost unusable.

What's different between what you expected and what actually happened?

The compression stage uses the majority of the resources and time. Compression by itself, compared to something like zlib or lzma, should not take very long. More may be happening as objects are compressed, but the amount of time gc spends compressing the objects and the resources it consumes are both unreasonable.

Memory: RSS = 3451152 KB (3.29 GB), VSZ = 29286272 KB (27.92 GB)
Time: 12902.83s user 8995.41s system 315% cpu 1:55:36.73 total

I've seen this issue with a number of repos, and repo size alone does not determine whether it happens: LLVM at 2.6 GB worked flawlessly, a 900 MB repo never finished, this 300 MB repo takes forever, and on something like chromium git will just crash.

[System Info]
hardware: 2.9 GHz Quad-Core i7
git version:
git version 2.30.0
cpu: x86_64
no commit associated with this build
sizeof-long: 8
sizeof-size_t: 8
shell-path: /bin/sh
uname: Darwin 19.6.0 Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64 x86_64
compiler info: clang: 12.0.0 (clang-1200.0.32.28)
libc info: no libc information available
$SHELL (typically, interactive shell): /usr/local/bin/zsh
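
[Workaround notes]

As a possible mitigation while the underlying behavior is investigated (not a fix), the delta search that --aggressive performs can be capped with standard git-config settings. These are real config keys; the specific values below are only illustrative guesses and would need tuning per machine:

    # shrink the aggressive delta window (default 250) and depth (default 50)
    git config gc.aggressiveWindow 50
    git config gc.aggressiveDepth 50
    # limit CPU parallelism and the memory each pack thread may use for its window
    git config pack.threads 2
    git config pack.windowMemory 256m
    # then re-run the same gc invocation from the steps above
    git gc --prune=now --aggressive

This trades pack quality (slightly larger resulting packfile) for bounded time and memory during the compression stage.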