Re: [PATCH bpf v2 2/2] bpf: Fix hashtab overflow check on 32-bit arches

John Fastabend <john.fastabend@xxxxxxxxx> writes:

> Alexei Starovoitov wrote:
>> On Thu, Feb 29, 2024 at 3:23 AM Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
>> >
>> > The hashtab code relies on roundup_pow_of_two() to compute the number of
>> > hash buckets, and contains an overflow check by checking if the resulting
>> > value is 0. However, on 32-bit arches, the roundup code itself can overflow
>> > by doing a 32-bit left-shift of an unsigned long value, which is undefined
>> > behaviour, so it is not guaranteed to truncate neatly. This was triggered
>> > by syzbot on the DEVMAP_HASH type, which contains the same check, copied
>> > from the hashtab code. So apply the same fix to hashtab, by moving the
>> > overflow check to before the roundup.
>> >
>> > The hashtab code also contained a check that prevents the total allocation
>> > size for the buckets from overflowing a 32-bit value, but since all the
>> > allocation code uses u64s, this does not really seem to be necessary, so
>> > drop it and keep only the strict overflow check of the n_buckets variable.
>> >
>> > Fixes: daaf427c6ab3 ("bpf: fix arraymap NULL deref and missing overflow and zero size checks")
>> > Signed-off-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
>> > ---
>> >  kernel/bpf/hashtab.c | 10 +++++-----
>> >  1 file changed, 5 insertions(+), 5 deletions(-)
>> >
>> > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>> > index 03a6a2500b6a..4caf8dab18b0 100644
>> > --- a/kernel/bpf/hashtab.c
>> > +++ b/kernel/bpf/hashtab.c
>> > @@ -499,8 +499,6 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
>> >                                                           num_possible_cpus());
>> >         }
>> >
>> > -       /* hash table size must be power of 2 */
>> > -       htab->n_buckets = roundup_pow_of_two(htab->map.max_entries);
>> >
>> >         htab->elem_size = sizeof(struct htab_elem) +
>> >                           round_up(htab->map.key_size, 8);
>> > @@ -510,11 +508,13 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
>> >                 htab->elem_size += round_up(htab->map.value_size, 8);
>> >
>> >         err = -E2BIG;
>> > -       /* prevent zero size kmalloc and check for u32 overflow */
>> > -       if (htab->n_buckets == 0 ||
>> > -           htab->n_buckets > U32_MAX / sizeof(struct bucket))
>> > +       /* prevent overflow in roundup below */
>> > +       if (htab->map.max_entries > U32_MAX / 2 + 1)
>> >                 goto free_htab;
>> 
>> No. We cannot artificially reduce max_entries; that will break real users.
>> A hash table with 4B elements is not that uncommon.

Erm, huh? The existing code has the n_buckets > U32_MAX / sizeof(struct
bucket) check, which limits max_entries to 134M (0x8000000). This patch
is *increasing* the maximum allowable size by a factor of 16 (to 2.1B or
0x80000000).

> Agree. How about returning E2BIG in these cases (32-bit arch and overflow) and
> letting the user figure it out? That makes more sense to me.

Isn't that exactly what this patch does? What am I missing here?

-Toke
