paint_alloc() allocates a big block of memory and splits it into
smaller, fixed-size chunks of memory whenever it's called. Each chunk
contains enough bits to represent all "new refs" [1] in a fetch from a
shallow repository. We do not check if the new "big block" is smaller
than the requested memory chunk though. If that happens, we'll happily
hand back a memory region smaller than expected, which will lead to
problems eventually.

A normal fetch may add/update a dozen new refs. Let's stay on the
"reasonably extreme" side and say we need 16k refs (or bits, from
paint_alloc()'s perspective). Each chunk of memory would then be 2kB,
much smaller than the memory pool (512kB). So, normally, the
under-allocation situation should never happen.

A bad guy, however, could make a fetch that adds more than 4M
new/updated refs to this code, which results in a memory chunk larger
than the pool size. Check this case and abort.

Noticed-by: Rasmus Villemoes <rv@xxxxxxxxxxxxxxxxxx>

[1] Details are in the commit message of 58babff (shallow.c: the 8
    steps to select new commits for .git/shallow - 2013-12-05),
    step 6.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@xxxxxxxxx>
---
 shallow.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/shallow.c b/shallow.c
index 2512ed3..75e1702 100644
--- a/shallow.c
+++ b/shallow.c
@@ -447,6 +447,9 @@ static uint32_t *paint_alloc(struct paint_info *info)
 	unsigned size = nr * sizeof(uint32_t);
 	void *p;
 	if (!info->pool_count || info->free + size > info->end) {
+		if (size > POOL_SIZE)
+			die("BUG: pool size too small for %d in paint_alloc()",
+			    size);
 		info->pool_count++;
 		REALLOC_ARRAY(info->pools, info->pool_count);
 		info->free = xmalloc(POOL_SIZE);
-- 
2.8.2.524.g6ff3d78