On Wed, Mar 05, 2014 at 08:55:30PM -0600, Robert Dailey wrote:
> What I'd like to do is somehow hunt down the largest commit (*not*
> blob) in the entire history of the repository to hopefully find out
> where huge directories have been checked in.
>
> I can't do a search for largest file (which most google results seem
> to show to do) since the culprit is really thousands of unnecessary
> files checked into a single subdirectory somewhere in history.
>
> Can anyone offer me some advice to help me reduce the size of my repo
> further? Thanks.

I'm not sure git can do this for you directly. However, since git is a
command line application, it's easy to script with sh. The downside is
the lack of speed, but since this is a one-time thing I don't think that
matters.

Since you told us that it is a commit with a huge number of files that
you're looking for, I took that approach instead of calculating the size
of each commit, which would be more expensive.

for commit in $(git log --pretty=oneline | cut -d " " -f 1)
do
        # count the files this commit touches; the empty --pretty format
        # leaves one blank line, which grep -c . skips
        nbr=$(git show --pretty="format:" --name-only $commit | grep -c .)
        echo "$nbr: $commit"
done | sort -n | tail -1

This will give you the commit that touches the most files. (The grep -c .
avoids the off-by-one that wc -l would give because of the blank line, and
sort -n makes the comparison numeric rather than lexical.)

-- 
Best regards, Fredrik Gustafsson
phone: 0733-608274
e-mail: iveqy@xxxxxxxxx
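
P.S. If you later do want actual sizes rather than file counts, something
along the lines of the rough sketch below might work. It is untested, it
will be much slower since it asks git for the size of every blob each
commit introduces, it needs a git new enough for
cat-file --batch-check='%(objectsize)', and merge commits will likely come
out as 0 since diff-tree shows no diff for them unless you pass -m:

git rev-list HEAD |
while read commit
do
        # list the new blob ids this commit introduces (column 4 of the
        # raw diff-tree output), skip the all-zero ids of deletions,
        # then sum up the blob sizes
        size=$(git diff-tree -r --root --no-commit-id "$commit" |
                awk '$4 !~ /^0+$/ { print $4 }' |
                git cat-file --batch-check='%(objectsize)' |
                awk '{ sum += $1 } END { print sum + 0 }')
        echo "$size $commit"
done | sort -n | tail -1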