Re: [script] find largest pack objects

On 2009.07.10 13:16:50 +1200, Antony Stubbs wrote:
> Blog post about git pruning history and finding large objects in
> your repo: http://stubbisms.wordpress.com/2009/07/10/git-script-to-show-largest-pack-objects-and-trim-your-waist-line/
> 
> This is a script I put together after migrating the Spring Modules
> project from CVS using git-cvsimport (which I also had to patch to
> get it to work on OS X / MacPorts). I wrote it because I wanted to
> get rid of all the large jar files, documentation, etc. that had been
> put into source control. However, if _large files_ are deleted in the
> latest revision, they can be hard to track down.

Here's my script, basically for the same purpose, but instead of
looking at the packfiles, it looks at the rev-list output to find the
objects that aren't prunable (ignoring the reflog). I'm also using a
somewhat ugly sed invocation to run rev-list only twice, regardless of
the number of objects to be shown, which greatly reduces the script's
running time.

#!/bin/sh
# Usage: git-find-large <count>
# Prints the <count> largest blobs as "<size> <filename>", smallest first.
git rev-list --all --objects |
	sed -n $(git rev-list --objects --all |
		# "<hash> <type> <size>" for every object, blobs only,
		# sorted by size; keep the <count> largest
		cut -f1 -d' ' | git cat-file --batch-check | grep blob |
		sort -n -k3 | tail -n$1 | while read hash type size;
		do
			# one sed expression per blob: turn "<hash> <filename>"
			# into "<size> <filename>" and print it
			echo -n "-e s/$hash/$size/p ";
		done) |
	sort -n -k1
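
In case the sed trick looks opaque: the command substitution expands to
a flat list of expressions, roughly of this shape (hashes and sizes are
placeholders):

sed -n -e s/<hash-1>/<size-1>/p -e s/<hash-2>/<size-2>/p ...

Each "<hash> <filename>" line from the outer rev-list that matches one
of the top blobs is thus rewritten to "<size> <filename>" and printed;
all other lines are suppressed by -n.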

It takes the number of objects to show as an argument, so to get the
top ten, run it as "git find-large 10" (assuming the script is in
$PATH and called git-find-large).
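
A run might then look like this (sizes and paths made up for
illustration; the smallest of the shown blobs comes first because of
the final "sort -n -k1"):

$ git find-large 3
1311766 docs/reference.pdf
5242880 lib/commons.jar
15728640 lib/spring-full.jar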

It doesn't list as much information as yours does (e.g. the compressed
size is missing), but it's good enough for me, and speed was far more
important, especially since the "rev-list --all --objects" trick gets
you only a single filename per blob: if a file was renamed, you may
need to run the script again after having deleted one version via
filter-branch.
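
For reference, getting rid of such a blob usually means something along
these lines (path made up; filter-branch rewrites all refs, so work on
a clone you can throw away):

git filter-branch --index-filter \
	'git rm --cached --ignore-unmatch lib/spring-full.jar' -- --all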

Something similar applies to deltified objects. As verify-pack shows
the size of the delta, your script might miss some file B if it is
currently stored as a delta against some other large file A. Only
after the blob for A has been deleted will B show up (as it is no
longer deltified).
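
For illustration, verify-pack -v prints lines like these (hashes and
numbers made up); the columns are SHA-1, type, size, size-in-packfile
and offset, plus depth and base SHA-1 for deltified objects:

<hash-A> blob 15728640 15730000 12
<hash-B> blob 1024 980 15745012 1 <hash-A>

Here B's real content may be almost as large as A's, but only the ~1K
delta shows up in the size column, so a script sorting on that column
won't notice B until A is gone.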

OTOH, this means that the output of my script is likely to have the same
filename over and over again. If that gets out of hand, I usually do
something like:
git find-large 100 | cut -d' ' -f2 | sort -u

So I get just the filenames, hoping that the top 100 include all
interesting things ;-)

Maybe this helps someone to come up with a smart combination of our
scripts.

Björn