Re: Cygwin can't handle huge packfiles?

On Mon, 3 Apr 2006, Johannes Schindelin wrote:
> 
> The problem is not mmap() on cygwin, but that a fork() has to jump through 
> hoops to reinstall the open file descriptors on cygwin. If the 
> corresponding file was deleted, that fails. Therefore, we work around that 
> on cygwin by actually reading the file into memory, *not* mmap()ing it.

Well, we could actually do a _real_ mmap on pack-files. The pack-files are 
much better mmap'ed - there we don't _want_ them to be removed while we're 
using them. It was the index file etc that was problematic.

Maybe the cygwin fake mmap should be triggered only for the index (and 
possibly the individual objects, if only because for those a malloc+read 
may actually be faster).

Using malloc+read on pack-files is pretty wasteful, since we usually only 
use a very small part of them (i.e. if we have a 1.5GB pack-file, it's sad 
to read all of it when we'd usually actually access just a small 
fraction of it).

That said, I think git _does_ have problems with large pack-files. We have 
some 32-bit issues etc, and plain virtual address space limits. So for now, 
it's probably best to limit pack-files to the few-hundred-meg size, and 
create several smaller ones rather than one huge one.
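For the record, a rough sketch of how you'd do that split with a present-day git. The --max-pack-size option to git repack (and the pack.packSizeLimit config) postdates this thread, so treat it as an assumption about later git rather than something available here:

```shell
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
echo content > file
git add file
git -c user.name=demo -c user.email=demo@example.com commit -qm demo
# Repack everything, capping each resulting pack at ~250MB so a huge
# history is split into several smaller pack-files automatically:
git repack -a -d --max-pack-size=250m
ls .git/objects/pack/*.pack
```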

		Linus
