Re: [PATCH 2/2] mm: kfence: fix objcgs vector allocation

On Sun, 27 Mar 2022 at 07:19, Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
>
> If a kfence object is allocated to back an objcgs vector, that slot of
> the pool ends up occupied permanently, since the vector is never freed.
> The possible solutions are 1) freeing the vector when the kfence object
> is freed, or 2) allocating all the vectors statically.  Since the memory
> consumption of the object vectors is low, it is better to choose 2) to
> fix the issue; it also avoids the overhead of allocating vectors in the
> future.
>
> Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> ---
>  mm/kfence/core.c   | 3 +++
>  mm/kfence/kfence.h | 1 +
>  2 files changed, 4 insertions(+)

Thanks for this -- mostly looks good. Minor comments below, and please
also fix what the test robot reported.
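
For anyone reading along: the trick the patch relies on is plain pointer
tagging -- memcg_data holds the address of the objcgs array with a flag
in its low bits, and since every KFENCE object sits alone in its page,
one obj_cgroup pointer embedded in the metadata can act as that array.
Here is a compilable toy sketch of the encoding (the flag/mask names and
helpers below are made up for illustration; the kernel's real ones are
MEMCG_DATA_OBJCGS and the objcgs lookup in mm/slab.h):

	#include <stdio.h>

	/* Made-up stand-ins for MEMCG_DATA_OBJCGS and the flags mask. */
	#define OBJCGS_FLAG	0x1UL
	#define FLAGS_MASK	0x3UL

	struct obj_cgroup;			/* opaque for this sketch */

	struct toy_meta {
		struct obj_cgroup *objcg;	/* one slot: a KFENCE object owns its page */
	};

	static struct obj_cgroup **decode_objcgs(unsigned long memcg_data)
	{
		if (!(memcg_data & OBJCGS_FLAG))
			return NULL;		/* no objcgs vector attached */
		return (struct obj_cgroup **)(memcg_data & ~FLAGS_MASK);
	}

	int main(void)
	{
		struct toy_meta meta = { .objcg = NULL };
		/* What the patch does, conceptually: tag and store the embedded slot. */
		unsigned long memcg_data = (unsigned long)&meta.objcg | OBJCGS_FLAG;

		printf("decoded %p, embedded slot %p\n",
		       (void *)decode_objcgs(memcg_data), (void *)&meta.objcg);
		return 0;
	}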

> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 13128fa13062..9976b3f0d097 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -579,9 +579,11 @@ static bool __init kfence_init_pool(void)
>         }
>
>         for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> +               struct slab *slab = virt_to_slab(addr);
>                 struct kfence_metadata *meta = &kfence_metadata[i];
>
>                 /* Initialize metadata. */
> +               slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;

Maybe just move this to kfence_guarded_alloc() -- see "/* Set required
slab fields */", where similar slab initialization is already done.
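
Roughly something like this (untested sketch; the surrounding lines and
the CONFIG_MEMCG guard are from memory, so please double-check against
the current code):

	/* Set required slab fields. */
	slab = virt_to_slab((void *)meta->addr);
	slab->slab_cache = cache;
	[...]
#ifdef CONFIG_MEMCG
	/* Point the objcgs "vector" at the embedded per-object slot. */
	slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
#endif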

>                 INIT_LIST_HEAD(&meta->list);
>                 raw_spin_lock_init(&meta->lock);
>                 meta->state = KFENCE_OBJECT_UNUSED;
> @@ -938,6 +940,7 @@ void __kfence_free(void *addr)
>  {
>         struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
>
> +       KFENCE_WARN_ON(meta->objcg);

This holds true for both SLAB and SLUB, right? (I think it does, but
just double-checking.)
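
(Expanding on why I think it does -- this is just my reading, so please
correct me if I'm wrong: both allocators' accounted free paths go through
the common memcg_slab_free_hook() in mm/slab.h, which uncharges and clears
the objcgs entry before the object is handed back to KFENCE -- and for a
KFENCE object that entry is meta->objcg itself. So by the time
__kfence_free() runs, something like this should already have happened:

	/* In memcg_slab_free_hook(), for a charged KFENCE object: */
	objcgs[off] = NULL;	/* objcgs points at &meta->objcg; off is 0 for a KFENCE address */
	obj_cgroup_uncharge(objcg, obj_full_size(s));

and the new KFENCE_WARN_ON(meta->objcg) should stay silent for both SLAB
and SLUB.)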

>         /*
>          * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
>          * the object, as the object page may be recycled for other-typed
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2a2d5de9d379..6f0e1aece3f8 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -89,6 +89,7 @@ struct kfence_metadata {
>         struct kfence_track free_track;
>         /* For updating alloc_covered on frees. */
>         u32 alloc_stack_hash;
> +       struct obj_cgroup *objcg;
>  };
>
>  extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
> --
> 2.11.0
>



