On Thu, Jan 24, 2019 at 02:51:50PM +0000, Naum Derzhi wrote:

> I have this problem: years ago one of our developers committed a large
> (Gigabytes) piece of binary data into our project repository. This
> should not have been done, but it happened. (Honest, it was not me.)
> We never needed this data in the repository.
>
> Using git rm removes these files from the working tree, but they are
> still somewhere in the repository, so when we clone, we transfer
> gigabytes of unneeded data.
>
> Is it possible to fix this problem?

You'll have to rewrite the offending history. You can use
git-filter-branch. See especially these sections of the manpage:

  https://git-scm.com/docs/git-filter-branch#_examples

  https://git-scm.com/docs/git-filter-branch#_checklist_for_shrinking_a_repository

as well as the warning in the DESCRIPTION section:

    WARNING! The rewritten history will have different object names for
    all the objects and will not converge with the original branch. You
    will not be able to easily push and distribute the rewritten branch
    on top of the original branch. Please do not use this command if you
    do not know the full implications, and avoid using it anyway, if a
    simple single commit would suffice to fix your problem. (See the
    "RECOVERING FROM UPSTREAM REBASE" section in git-rebase(1) for
    further information about rewriting published history.)

You may also want to check out the BFG repo cleaner[1], a separate
project that handles this case a little more simply (it doesn't save
you from dealing with rewritten history, but it does avoid having to
learn filter-branch's flexible but confusing syntax).

-Peff

[1] https://rtyley.github.io/bfg-repo-cleaner/
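For reference, a minimal sketch of the filter-branch approach described
above, assuming the offending file lives at a hypothetical path
data/huge.bin (substitute the real path; this rewrites every ref in the
repository, so do it on a fresh clone you can throw away if it goes
wrong):

```shell
# Rewrite all refs, deleting data/huge.bin from every commit's index.
# --ignore-unmatch keeps the filter from failing on commits that
# predate the file; --prune-empty drops commits left with no changes;
# --tag-name-filter cat rewrites tags to point at the new commits.
git filter-branch --force --index-filter \
    'git rm --cached --ignore-unmatch data/huge.bin' \
    --prune-empty --tag-name-filter cat -- --all

# Per the "Checklist for shrinking a repository" linked above: drop the
# backup refs and old reflog entries, then repack so the now-unreachable
# blobs are actually deleted.
rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --prune=now --aggressive
```

Note the gc step only shrinks the clone you ran it in; after you
force-push the rewritten refs, everyone else still has to re-clone (or
otherwise discard the old history) to get the benefit.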