Re: [PATCH 1/5] mm/sl[au]b: Unify __ksize()

On 2/23/22 20:06, Marco Elver wrote:
> On Wed, 23 Feb 2022 at 19:39, Vlastimil Babka <vbabka@xxxxxxx> wrote:
>> On 2/21/22 11:53, Hyeonggon Yoo wrote:
>> > Only SLOB need to implement __ksize() separately because SLOB records
>> > size in object header for kmalloc objects. Unify SLAB/SLUB's __ksize().
>> >
>> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
>> > ---
>> >  mm/slab.c        | 23 -----------------------
>> >  mm/slab_common.c | 29 +++++++++++++++++++++++++++++
>> >  mm/slub.c        | 16 ----------------
>> >  3 files changed, 29 insertions(+), 39 deletions(-)
>> >
>> > diff --git a/mm/slab.c b/mm/slab.c
>> > index ddf5737c63d9..eb73d2499480 100644
>> > --- a/mm/slab.c
>> > +++ b/mm/slab.c
>> > @@ -4199,27 +4199,4 @@ void __check_heap_object(const void *ptr, unsigned long n,
>> >  }
>> >  #endif /* CONFIG_HARDENED_USERCOPY */
>> >
>> > -/**
>> > - * __ksize -- Uninstrumented ksize.
>> > - * @objp: pointer to the object
>> > - *
>> > - * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
>> > - * safety checks as ksize() with KASAN instrumentation enabled.
>> > - *
>> > - * Return: size of the actual memory used by @objp in bytes
>> > - */
>> > -size_t __ksize(const void *objp)
>> > -{
>> > -     struct kmem_cache *c;
>> > -     size_t size;
>> > -
>> > -     BUG_ON(!objp);
>> > -     if (unlikely(objp == ZERO_SIZE_PTR))
>> > -             return 0;
>> > -
>> > -     c = virt_to_cache(objp);
>> > -     size = c ? c->object_size : 0;
>>
>> This comes from commit a64b53780ec3 ("mm/slab: sanity-check page type when
>> looking up cache") by Kees, and virt_to_cache() is an implicit check for
>> the folio slab flag ...
>>
>> > -
>> > -     return size;
>> > -}
>> > -EXPORT_SYMBOL(__ksize);
>> > diff --git a/mm/slab_common.c b/mm/slab_common.c
>> > index 23f2ab0713b7..488997db0d97 100644
>> > --- a/mm/slab_common.c
>> > +++ b/mm/slab_common.c
>> > @@ -1245,6 +1245,35 @@ void kfree_sensitive(const void *p)
>> >  }
>> >  EXPORT_SYMBOL(kfree_sensitive);
>> >
>> > +#ifndef CONFIG_SLOB
>> > +/**
>> > + * __ksize -- Uninstrumented ksize.
>> > + * @objp: pointer to the object
>> > + *
>> > + * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
>> > + * safety checks as ksize() with KASAN instrumentation enabled.
>> > + *
>> > + * Return: size of the actual memory used by @objp in bytes
>> > + */
>> > +size_t __ksize(const void *object)
>> > +{
>> > +     struct folio *folio;
>> > +
>> > +     if (unlikely(object == ZERO_SIZE_PTR))
>> > +             return 0;
>> > +
>> > +     folio = virt_to_folio(object);
>> > +
>> > +#ifdef CONFIG_SLUB
>> > +     if (unlikely(!folio_test_slab(folio)))
>> > +             return folio_size(folio);
>> > +#endif
>> > +
>> > +     return slab_ksize(folio_slab(folio)->slab_cache);
>>
>> ... and here in the common version, for SLAB, you now trust that the folio
>> will be a slab folio, thus undoing the intention of that commit. Maybe
>> that's not good and we should keep the folio_test_slab() check for both
>> cases? Although maybe it's also strange that prior to this patch, SLAB
>> would return 0 if the test failed, while SLUB would return folio_size().
>> Probably because with SLUB this can be a large kmalloc here, while with
>> SLAB it cannot. So we could keep doing that in the unified version, or the
>> KASAN devs (CC'd) could advise something better?
> 
> Is this a definitive failure case?

Yeah, if we call it on a supposed object pointer that turns out not to be
a slab object, it usually means some use-after-free (UAF), so a failure.
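
For reference, the implicit check lives in mm/slab.h and looks roughly like
this in the tree this series is against (abridged; comments mine):

static inline struct slab *virt_to_slab(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	/* Not a slab folio -- the caller sees NULL, i.e. failure. */
	if (!folio_test_slab(folio))
		return NULL;

	return folio_slab(folio);
}

static inline struct kmem_cache *virt_to_cache(const void *obj)
{
	struct slab *slab;

	slab = virt_to_slab(obj);
	if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n", __func__))
		return NULL;	/* the old SLAB __ksize() then returned 0 */

	return slab->slab_cache;
}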

> My opinion here is that returning 0 from ksize() in case of failure
> will a) provide a way to check for errors, and b) be the more graceful
> failure mode if the size is used unconditionally to compute an address
> (see the comment added in 0d4ca4c9bab39 for what happens when memory
> that KASAN considers invalid is accessed).

Sounds good, thanks. Then the patch should be fixed up to keep checking the
slab flag and to return 0 otherwise for CONFIG_SLAB.
For SLUB we might fail to detect the failure case by assuming the object was
a large kmalloc. Maybe we could improve that and only assume a large kmalloc
when folio_size() is large enough that the corresponding allocation would
actually have been done as a large kmalloc, and the object pointer points to
the beginning of the folio?
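
Something like this rough, untested sketch, perhaps (using
KMALLOC_MAX_CACHE_SIZE as the large-kmalloc threshold is my assumption,
not part of the posted patch):

size_t __ksize(const void *object)
{
	struct folio *folio;

	if (unlikely(object == ZERO_SIZE_PTR))
		return 0;

	folio = virt_to_folio(object);

	if (unlikely(!folio_test_slab(folio))) {
#ifdef CONFIG_SLUB
		/*
		 * Only SLUB hands allocations larger than the biggest
		 * kmalloc cache directly to the page allocator, and
		 * such an allocation starts at the folio's beginning.
		 */
		if (folio_size(folio) > KMALLOC_MAX_CACHE_SIZE &&
		    object == folio_address(folio))
			return folio_size(folio);
#endif
		/* Not a valid object -- likely a use-after-free. */
		return 0;
	}

	return slab_ksize(folio_slab(folio)->slab_cache);
}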



