Re: [PATCH 1/6] mm: kfence: simplify kfence pool initialization

> On Mar 28, 2023, at 20:05, Marco Elver <elver@xxxxxxxxxx> wrote:
> 
> On Tue, 28 Mar 2023 at 13:55, Marco Elver <elver@xxxxxxxxxx> wrote:
>> 
>> On Tue, 28 Mar 2023 at 11:58, Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
>>> 
>>> There are three similar loops that initialize the kfence pool; merge
>>> them into one loop to simplify the code and make it more efficient.
>>> 
>>> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
>> 
>> Reviewed-by: Marco Elver <elver@xxxxxxxxxx>
>> 
>>> ---
>>> mm/kfence/core.c | 47 ++++++-----------------------------------------
>>> 1 file changed, 6 insertions(+), 41 deletions(-)
>>> 
>>> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
>>> index 7d01a2c76e80..de62a84d4830 100644
>>> --- a/mm/kfence/core.c
>>> +++ b/mm/kfence/core.c
>>> @@ -539,35 +539,10 @@ static void rcu_guarded_free(struct rcu_head *h)
>>> static unsigned long kfence_init_pool(void)
>>> {
>>>        unsigned long addr = (unsigned long)__kfence_pool;
>>> -       struct page *pages;
>>>        int i;
>>> 
>>>        if (!arch_kfence_init_pool())
>>>                return addr;
>>> -
>>> -       pages = virt_to_page(__kfence_pool);
>>> -
>>> -       /*
>>> -        * Set up object pages: they must have PG_slab set, to avoid freeing
>>> -        * these as real pages.
>>> -        *
>>> -        * We also want to avoid inserting kfence_free() in the kfree()
>>> -        * fast-path in SLUB, and therefore need to ensure kfree() correctly
>>> -        * enters __slab_free() slow-path.
>>> -        */
> 
> Actually: can you retain this comment somewhere?

Sure, I'll move this to the right place.

Thanks.
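
For context, a minimal sketch of where the comment could live in a merged
loop (an illustration based on the pre-patch code quoted above, not the
actual respin of this patch; the guard-page protection and kfence_metadata
setup steps are elided):

static unsigned long kfence_init_pool(void)
{
	unsigned long addr = (unsigned long)__kfence_pool;
	struct page *pages;
	int i;

	if (!arch_kfence_init_pool())
		return addr;

	pages = virt_to_page(__kfence_pool);

	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
		struct page *page = nth_page(pages, i);

		/* Only every other page is an object page; skip the rest. */
		if (!i || (i % 2))
			continue;

		/*
		 * Set up object pages: they must have PG_slab set, to avoid
		 * freeing these as real pages.
		 *
		 * We also want to avoid inserting kfence_free() in the kfree()
		 * fast-path in SLUB, and therefore need to ensure kfree()
		 * correctly enters __slab_free() slow-path.
		 */
		__folio_set_slab(page_folio(page));
	}

	/* ... guard-page protection and metadata initialization elided ... */
	return 0;
}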
