I recently came across a very annoying problem, characterised by the
following example:
On a recent Ubuntu install:

    dd if=/dev/zero of=file bs=1300k count=1k
    git add file
    git commit file -m "Add huge file"

(Note the git add: committing a brand-new path without staging it first
fails with a pathspec error.)
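That dd invocation writes 1300KiB * 1024 = 1,363,148,800 bytes, i.e. a
~1.3GB file, which lines up with the 1,363,152,896-byte mmap in the
failure below (the difference is exactly one 4096-byte page).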
The repository can be pushed to and pulled from other Ubuntu installs
successfully, but on a Mac OS X 10.5.7 machine with 4GB of RAM, git
pull produces:
remote: Counting objects: 6, done.
remote: git(1533,0xb0081000) malloc: *** mmap(size=1363152896) failed
(error code=12)
remote: *** error: can't allocate region
remote: *** set a breakpoint in malloc_error_break to debug
remote: git(1533,0xb0081000) malloc: *** mmap(size=1363152896) failed
(error code=12)
remote: *** error: can't allocate region
remote: *** set a breakpoint in malloc_error_break to debug
remote: fatal: Out of memory, malloc failed
error: git upload-pack: git-pack-objects died with error.
fatal: git upload-pack: aborting due to possible repository corruption
on the remote side.
remote: aborting due to possible repository corruption on the remote
side.
fatal: protocol error: bad pack header
The problem appears to be the different maximum mmap sizes available
on different OSes; presumably the git binary on OS X is running as a
32-bit process and simply cannot find ~1.3GB of contiguous address
space, even on a machine with 4GB of RAM. While I don't really mind
git imposing a maximum file size, a restriction that varies from OS to
OS is very annoying. Fixing this required rewriting history to remove
the commit, which caused problems, as the commit had already been
pulled, and built on, by a number of developers.
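For anyone who hits the same thing, one way to do that rewrite is
roughly the following (a sketch, assuming the offending blob lives at
the path "file"; filter-branch rewrites every branch, so everyone
downstream has to re-clone or rebase):

    git filter-branch --index-filter \
        'git rm --cached --ignore-unmatch file' \
        --prune-empty -- --all
    # The big blob is only really gone once the backup refs and
    # reflogs that still reach it are dropped and the repo is repacked:
    rm -rf .git/refs/original
    git reflog expire --expire=now --all
    git gc --prune=now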
If the requirement that all files can be mmapped cannot easily be
removed, would it perhaps be acceptable to impose a (soft?) 1GB(ish)
file size limit? I suggest 1GB because all the OSes I can easily get
hold of (FreeBSD, Windows, Mac OS X, Linux) support an mmap of size >
1GB.