Re: [RFC PATCH v2 5/5] range-diff: fix integer overflow & segfault on cost[i + n * j]

On Fri, Dec 10, 2021 at 01:30:42PM +0100, Ævar Arnfjörð Bjarmason wrote:

> In preceding commits the "column_count" and the "int *"s we malloc()
> were changed to track their lengths with a size_t, so we're able to
> track as many "cost" items as malloc() will give us.
> 
> But we'd still segfault on relatively large range comparisons,
> e.g. this would segfault:
> 
>     git -P range-diff --creation-factor=50 origin/master...git-for-windows/main
> 
> The reason for that is that we'd still use integer types to compute an
> array index into the "cost" array, which would overflow. The result of
> a signed overflow in C is undefined, but on my system it'll result in
> a negative number, and a prompt segfault as we'll try to access a
> negative array index.

Note that this isn't just a read. We write to cost[i + n * j] first. In
practice, because of the iteration order and signed overflow wrapping
the way it usually does, the first out-of-bounds access is always a
write to a single int that's 2GB before the array. And that should
segfault.
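
To make the failure mode concrete, here's a standalone sketch (the
values are illustrative, and the wraparound is technically undefined
behavior that just happens to behave this way on common platforms):

        #include <stdio.h>

        int main(void)
        {
                int i = 0, j = 50000, n = 50000;
                /* 50000 * 50000 = 2500000000 > INT_MAX; in practice
                 * this wraps to -1794967296 */
                int idx = i + n * j;
                printf("idx = %d\n", idx);
                /* writing cost[idx] lands far before the array */
                return 0;
        }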

I do wonder if this can be turned into a heap overflow exploit. I think
you'd probably need to manage to get 2GB on the heap to avoid an
immediate segfault.

> Luckily we used the COST() macro in linear-assignment.c already for
> all of these lookups, and in a preceding commit we renamed "n" in
> "range-diff.c"'s get_correspondences() to "column_count" in
> preparation for using it here.
> 
> So let's use it for the three occurrences of "cost" indexing in
> range-diff.c, and have the COST() macro itself do overflow checking
> with st_mult() and st_add(). Due to the cast from "int" to "size_t"
> we'll avoid the segfault, and will end up correctly pointing to the
> relevant "int *".

Is it actually necessary to do bounds checking here? If we know the
arrays are sized correctly, and we use an appropriate integer type,
wouldn't we know that our computations are always in bounds?
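
For reference, the macro in question in linear-assignment.c currently
looks like this:

        #define COST(column, row) cost[(column) + column_count * (row)]

and with your patch it becomes something like this (sketching from the
description above, not quoting the patch):

        #define COST(column, row) \
                cost[st_add((size_t)(column), st_mult(column_count, (row)))]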

(I saw your other discussion of the unreliability of ssize_t; if we
don't want to assume it's of the same magnitude as size_t, then intmax_t
would work).

The reason I ask in particular is that I wonder whether these
non-intrinsic st_* helpers might introduce a measurable slowdown. When I
suggested them earlier, it was alongside the suggestion that we do all
of our bounds-checks up front, during the allocation.
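
Something like this is what I had in mind, i.e. pay for the check once
(a sketch; "row_count" is a stand-in name for whatever the second
dimension ends up being called):

        /* overflow-check the dimensions once, at allocation time ... */
        int *cost = xmalloc(st_mult(sizeof(*cost),
                                    st_mult(column_count, row_count)));

        /* ... then index with a wide-enough type, unchecked */
        #define COST(column, row) \
                cost[(size_t)(column) + (size_t)column_count * (size_t)(row)]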

> It's still possible for us to overflow even with this change. That's
> because the iteration variables (such as "i" and "j" in this diff
> context) are all "int". Even if we changed those to "size_t" or
> "intmax_t" (not trivial, as we depend on them being negative in some
> places), the underlying "struct string_list"'s "nr" member is an
> "unsigned int", which would eventually overflow.

The string_list overflow is something I do think we ought to fix. But we
know from past experiments that it can't actually cause a heap overflow.
Can overflowing one of the ints cause one, though?

If we're computing i*n+j and j goes negative, then the cast to size_t
will turn it into a big number. Depending on how negative it is, that
might not overflow a size_t, but it would still be well outside the
bounds of the allocated array. E.g., consider code like this:

        /* assumes git-compat-util.h for st_add() and PRIuMAX */
        int j = INT_MAX;
        while (1) {
                printf("int = %d\n", j);
                printf("size_t = %"PRIuMAX"\n", (uintmax_t)st_add(0, j));
                j++; /* signed overflow; in practice wraps to INT_MIN */
        }

which shows what happens when i=0 but j approaches overflow. We wrap to
-2^31, which after the cast is a large but still representable size_t.
So st_add() does not trigger its overflow check, but I think we'd still
be out of bounds.
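
Spelling out the arithmetic (assuming 32-bit int and 64-bit size_t):

        int j = INT_MAX;      /* 2147483647 */
        j++;                  /* UB; in practice wraps to -2147483648 */
        size_t s = (size_t)j; /* sign-extends to 18446744071562067968 */
        /* SIZE_MAX is 18446744073709551615, so st_add() sees no overflow */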

I suspect it's OK in practice from a security perspective, because it's
so far out of bounds as to cause a segfault and not any kind of heap
overflow. But it really feels like the fix is incomplete, whereas using
the correct types would avoid the out-of-bounds access entirely.

> We're unlikely to encounter a 2-4 billion commit history on 32-bit
> platforms. Even if we did, one of the types in the underlying object
> machinery would probably overflow before we overflowed here. So let's
> punt on that for now. If we're ever going to solve that issue [1],
> changing the "struct string_list"'s "nr" member to a "size_t" might be
> a good start.

I'm less concerned about "unlikely" and more about "what can bad actors
trigger". Two billion commits is probably out of reach in practice,
though (typically I've seen things get untenable around a few hundred
million objects total).

Still, it feels a bit hand-wavy.

-Peff


