I have a git-svn clone that I've been working on which is a full and complete conversion of our SVN repository at work. It started out at 1.4GB (per 'size-pack' in git count-objects -v). I ran the following script to clean up a directory in the repo history that I suspect is huge (we had a third-party library checked in that, uncompressed, was about 1.2GB in size):

-------------------------------
files=$@
echo "Removing: $files..."

git filter-branch --index-filter \
    "git rm -rf --cached --ignore-unmatch $files" -- --all

# remove the temporary history git-filter-branch otherwise
# leaves behind for a long time
rm -rf .git/refs/original/ && \
    git reflog expire --expire=now --all && \
    git gc --aggressive --prune=now
-------------------------------

Even though I seem to have removed the directory, the repository size (looking at 'size-pack' again) only went down about 200MB, so it's at 1.2GB now. There is about 3-5 years of commit history in this repository.

What I'd like to do is somehow hunt down the largest commit (*not* blob) in the entire history of the repository, to hopefully find out where huge directories have been checked in. I can't just search for the largest file (which is what most Google results suggest doing), since the culprit is really thousands of unnecessary files checked into a single subdirectory somewhere in history.

Can anyone offer me some advice to help me reduce the size of my repo further? Thanks.

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
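For what it's worth, here is the kind of thing I've been experimenting with so far: a sketch that ranks every commit by the total size of the tree it points to, on the assumption that "largest commit" means "commit whose checkout is biggest" (rather than the size of its delta in the pack). `git ls-tree -r -l` prints one line per blob with the blob size in column 4, so summing that column per commit shows where the tree suddenly balloons. It runs one ls-tree per commit, so it's slow on years of history, but it does find the offending range:

```shell
#!/bin/sh
# Rank all commits by total tree size (sum of all blob sizes at that
# commit).  The biggest entries at the bottom are the commits whose
# checkouts are largest -- i.e. where the huge directory was present.
git rev-list --all |
while read -r commit; do
    # column 4 of `ls-tree -r -l` is the blob size in bytes
    size=$(git ls-tree -r -l "$commit" | awk '{sum += $4} END {print sum}')
    printf '%s %s\n' "$size" "$commit"
done | sort -n | tail -20
```

I'm not sure this is the right approach, though, since it measures uncompressed checkout size rather than what the objects actually cost in the pack.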