On 4/14/2012 9:15 PM, Jeff King wrote:
> On Sat, Apr 14, 2012 at 09:13:17PM -0500, Neal Kreitzinger wrote:
>
>> Does a file's delta-compression efficiency in the pack-file directly
>> correlate to its efficiency of transmission size/bandwidth in a
>> git-fetch and git-push? IOW, are big-files also a problem for
>> git-fetch and git-push by taking too long in a remote transfer?
>
> Yes. The on-the-wire format is a packfile. We create a new packfile on
> the fly, so we may find new deltas (e.g., between objects that were
> stored on disk in two different packs), but we will mostly be reusing
> deltas from the existing packs.
>
> So any time you improve the on-disk representation, you are also
> improving the network bandwidth utilization.
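
If I follow that, then repacking more aggressively on the dev server ought to shrink both the on-disk packs and what later goes over the wire. A sketch of what I have in mind (the window/depth values are just guesses on my part, and it may not buy much if the big database files don't delta well):

    # re-delta everything from scratch with a larger delta search window
    git repack -a -d -f --window=250 --depth=50
    # compare repository size before and after
    git count-objects -v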
We use git to transfer database files from the dev server to qa-servers.
Sometimes these transfers barf for some reason and I get called in to
remediate. I had assumed the user closed their session prematurely because
it was "taking too long". However, now I'm wondering if the git-pull
--ff-only is dying on its own due to the big files. It could be that a
qa-server that hasn't updated its database files in a while is pulling way
more than another qa-server that does its git-pull more frequently.

How would I go about troubleshooting this? Are there log files I should
look at? (I'm using git 1.7.1, compiled with the git makefile, on rhel6.)
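
My best guess so far is to re-run the failing pull by hand with git's
tracing turned on and keep the transcript, something like the line below
(GIT_CURL_VERBOSE only matters if the transport is http/https). Is that a
reasonable approach, or is there something better?

    # capture a verbose transcript of the pull for later inspection
    GIT_TRACE=1 GIT_CURL_VERBOSE=1 git pull --ff-only 2>&1 | tee /tmp/pull-trace.log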
When I go to remediate, I do a git-reset --hard to clear out the barfed
worktree/index and then run git-pull --ff-only manually, and it always
works. I'm not sure that proves it wasn't git that barfed the first time.
Maybe the first time git brought some stuff over and barfed because it bit
off more than it could chew, and the second time it has less food to chew
because it already chewed some of it the first time, which would explain
why it works on the second attempt.
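
For what it's worth, the exact remediation I run by hand is just the two
commands below. Next time it happens I might also check, before resetting,
whether the first, failed pull left anything behind, though I'm not sure
that would really distinguish a killed session from git dying on its own:

    # current manual remediation
    git reset --hard            # throw away the barfed worktree/index
    git pull --ff-only          # re-run the pull that originally failed

    # possible check beforehand, next time it barfs
    git count-objects -v        # did size-pack already grow from the failed pull?
    ls .git/objects/pack/       # any leftover tmp_pack_* files lying around?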
v/r,
neal