Re: [kernel-hardening] [PATCH v8 3/3] x86/refcount: Implement fast refcount overflow protection

On Tue, Jul 25, 2017 at 5:03 AM, Li Kun <hw.likun@xxxxxxxxxx> wrote:
> Hi Kees,
>
>
> on 2017/7/25 2:35, Kees Cook wrote:
>>
>> +static __always_inline __must_check
>> +int __refcount_add_unless(refcount_t *r, int a, int u)
>> +{
>> +       int c, new;
>> +
>> +       c = atomic_read(&(r->refs));
>> +       do {
>> +               if (unlikely(c == u))
>> +                       break;
>> +
>> +               asm volatile("addl %2,%0\n\t"
>> +                       REFCOUNT_CHECK_LT_ZERO
>> +                       : "=r" (new)
>> +                       : "0" (c), "ir" (a),
>> +                         [counter] "m" (r->refs.counter)
>> +                       : "cc", "cx");
>> +
>> +       } while (!atomic_try_cmpxchg(&(r->refs), &c, new));
>> +
>> +       return c;
>> +}
>
> Here, when the result is LT_ZERO, you will saturate r->refs.counter and
> make the atomic_try_cmpxchg(&(r->refs), &c, new) bound to fail the first
> time.
>
> Maybe we can just saturate the value of the variable "new"?

Oh, good catch! Thanks. Actually, it's even worse than that: since the
exception handler saturates the counter in memory, the first
atomic_try_cmpxchg() fails, c gets reloaded as the saturated value, and
the retry writes back that value plus a, so we'll exit this function
without the refcount being correctly saturated. The final result will
be INT_MIN / 2 + a. It's not terrible, but I should have noticed this in
testing. (There was a gap in my testing for the _not_zero() overflows,
which I've fixed now...)
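
To make the suggestion concrete, here is a rough C-only sketch of what
saturating the local "new" value could look like. This is an
illustration, not the actual patch: the constant name below is made up,
and doing the clamp purely in C like this would lose the exception
reporting that the REFCOUNT_CHECK_LT_ZERO path provides.

/* Hypothetical saturation point, mirroring the INT_MIN / 2 value above. */
#define REFCOUNT_SATURATED_SKETCH	(INT_MIN / 2)

static __always_inline __must_check
int __refcount_add_unless_sketch(refcount_t *r, int a, int u)
{
	int c, new;

	c = atomic_read(&(r->refs));
	do {
		if (unlikely(c == u))
			break;

		/*
		 * Clamp the value we are about to write back, so the
		 * cmpxchg stores the saturated value itself instead of
		 * racing against an already-saturated in-memory counter.
		 */
		new = c + a;
		if (unlikely(new < 0))
			new = REFCOUNT_SATURATED_SKETCH;

	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));

	return c;
}

The real fix presumably still wants the asm check so overflows keep
getting reported; the sketch is only meant to show where the write-back
value would need to be clamped.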

I'll figure this out or revert to the logic I was using in v6...

-Kees

-- 
Kees Cook
Pixel Security


