Re: "git fsck" fails on malloc of 80 G

On Mon, Dec 16, 2013 at 11:05:32AM -0500, Dale R. Worley wrote:

> # git fsck
> Checking object directories: 100% (256/256), done.
> fatal: Out of memory, malloc failed (tried to allocate 80530636801 bytes)
> #

Can you give us a backtrace from the die() call? It would help to
know what it was trying to allocate 80G for.
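
If gdb is available and your git was built with debug symbols (an
assumption on my part), something like this should catch it:

  $ gdb --args git fsck
  (gdb) break die
  (gdb) run
  ... wait for the breakpoint to trigger ...
  (gdb) backtrace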

> I don't know if this is due to an outright bug or not.  But it seems
> to me that "git fsck" should not need to allocate any more memory than
> the size (1 GiB) of a single pack file.  And given its purpose, "git
> fsck" should be one of the *most* robust Git tools!

Agreed. Fsck tends to be more robust, but there are still many code
paths that can die(). One of the problems I ran into recently is that
corrupt data can cause it to make a large allocation; we notice the
bogus data as soon as we start filling the buffer, but sometimes the
bogus allocation itself is large enough to kill the process.

That was fixed by b039718, which is in master but not yet in any
released version. You might see whether that helps.
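
For reference, the general shape of that kind of fix is to treat a
size field read from the (possibly corrupt) data as untrusted and
sanity-check it before handing it to malloc(). This is only a sketch
of the pattern, not the code from b039718; the helper name and the
size cap are made up:

  /*
   * Sketch only: check-before-allocate.  check_and_alloc() and
   * MAX_REASONABLE_SIZE are illustrative names, not git's API.
   */
  #include <stdio.h>
  #include <stdlib.h>

  #define MAX_REASONABLE_SIZE ((size_t)1 << 30)  /* e.g. one pack's worth */

  static void *check_and_alloc(size_t claimed, size_t bytes_left_in_input)
  {
          void *buf;

          /*
           * A corrupt header can claim an absurd size; refuse it up
           * front instead of letting the huge malloc() kill us.
           */
          if (claimed > bytes_left_in_input || claimed > MAX_REASONABLE_SIZE) {
                  fprintf(stderr, "corrupt object: claims %zu bytes\n", claimed);
                  return NULL;
          }

          buf = malloc(claimed);
          if (!buf)
                  fprintf(stderr, "malloc failed (%zu bytes)\n", claimed);
          return buf;
  }

The point is just that the claimed size gets checked against what
could actually be present before we trust it.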

-Peff



