Re: [PATCH] hugetlb: fix locking in region_add,region_chg,allocate_file_region_entries


 



On 23.10.20 13:25, Laurent Cremmer wrote:
>>
>> On 23.10.20 09:47, Laurent Cremmer wrote:
>>> commit 0db9d74ed884 ("hugetlb: disable region_add file_region coalescing")
>>> introduced issues with regard to locking:
>>>
>>> - Missing spin_unlock in hugetlb.c:region_add and hugetlb.c:region_chg
>>>   when returning after an error.
>>> - Missing spin_lock in hugetlb.c:allocate_file_region_entries
>>>   when returning after an error.
>>>
>>> The first two errors were spotted using the Coccinelle static code
>>> analysis tool with the mini_lock.cocci script,
>>> whose goal is to find missing unlocks:
>>>
>>> make coccicheck mode=REPORT m=mm/hugetlb.c
>>>
>>>     mm/hugetlb.c:514:3-9: preceding lock on line 487
>>>     mm/hugetlb.c:564:2-8: preceding lock on line 554
>>>
>>> The third instance was spotted by manual examination.
>>>
>>> In static long region_add(...) and static long region_chg(...), releasing
>>> the acquired lock when exiting via the error path was removed.
>>> This causes these functions to return with the lock held if they do not
>>> succeed.
>>>
>>> This patch reintroduces the original error path, making sure the lock is
>>> properly released on all exits.
>>>
>>> A new function, allocate_file_region_entries, was also introduced that
>>> must be called with the lock held and returns with the lock held.
>>> However, the lock is temporarily released during processing and is not
>>> reacquired on error.
>>>
>>> This patch ensures that the lock is reacquired in the error path as well.
>>>
>>> Fixes: 0db9d74ed884 ("hugetlb: disable region_add file_region coalescing")
>>> Link: https://lists.elisa.tech/g/development-process/message/289
>>> Signed-off-by: Laurent Cremmer <laurent@xxxxxxxxxxxxxxxxxx>
>>> Reviewed-by: Oliver Hartkopp <hartkopp@xxxxxxxxxxxxxxxxxx>
>>> ---
>>>  mm/hugetlb.c | 17 +++++++++++------
>>>  1 file changed, 11 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index fe76f8fd5a73..92bea6f77361 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -438,7 +438,7 @@ static int allocate_file_region_entries(struct resv_map *resv,
>>>               for (i = 0; i < to_allocate; i++) {
>>>                       trg = kmalloc(sizeof(*trg), GFP_KERNEL);
>>
>> So we're allocating memory with GFP_KERNEL while holding a spinlock?
>>
>> If my memory isn't completely wrong, that's broken, no? This would
>> require GFP_ATOMIC?
>>
>> But I might be messing up things :)
>>
> 
> Ah, the beauty of patches sometimes not telling the whole story :-)
> The lock is released in line 437, prior to attempting to allocate with
> GFP_KERNEL, so GFP_ATOMIC is not required in this context.

Ah, I actually misread line 437, sorry :)

[...]

>>>
>>> @@ -450,7 +450,8 @@ static int allocate_file_region_entries(struct resv_map *resv,
>>>
>>>       return 0;
>>>
>>> -out_of_memory:
>>> +out_of_memory_unlocked:
> 
> However, in light of your remarks, I would question re-acquiring the
> lock before cleaning up with kfree and would defer it until after the
> clean-up has ended. Agree?
> 
>>> +     spin_lock(&resv->lock);
>>>       list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
>>>               list_del(&rg->link);
>>>               kfree(rg);
>>> @@ -508,7 +509,8 @@ static long region_add(struct resv_map *resv, long f, long t,
>>>
>>>               if (allocate_file_region_entries(
>>>                           resv, actual_regions_needed - in_regions_needed)) {
>>> -                     return -ENOMEM;
>>> +                     add = -ENOMEM;
>>
>> Ditto, does this handle atomic context?
>>
> 
> IMHO it does, since allocate_file_region_entries will drop the lock
> when attempting to allocate entries and reacquires it on exit.

Right, that's the source of my confusion.

> 
>>
>>> +                     goto out_locked;
>>>               }
>>>
>>>               goto retry;
>>> @@ -517,7 +519,7 @@ static long region_add(struct resv_map *resv, long f, long t,
>>>       add = add_reservation_in_range(resv, f, t, h_cg, h, NULL);
>>>
>>>       resv->adds_in_progress -= in_regions_needed;
>>> -
>>> +out_locked:
>>>       spin_unlock(&resv->lock);
>>>       VM_BUG_ON(add < 0);
>>>       return add;
>>> @@ -557,11 +559,14 @@ static long region_chg(struct resv_map *resv, long f, long t,
>>>       if (*out_regions_needed == 0)
>>>               *out_regions_needed = 1;
>>>
>>> -     if (allocate_file_region_entries(resv, *out_regions_needed))
>>> -             return -ENOMEM;
>>> +     if (allocate_file_region_entries(resv, *out_regions_needed)) {
>>> +             chg = -ENOMEM;
>>> +             goto out_locked;
>>> +     }
>>
>> Ditto.
>>
> 
> See above, allocate_file_region_entries will return the lock held.
> 
> Now, in defence of the original implementation: what "looks" like a bug
> at first sight might be an attempt to "optimize out" a pair of
> lock/unlock operations on the error paths of region_add and region_chg
> by relying on the implicit knowledge that
> allocate_file_region_entries returns with the lock not held on error.
> Unfortunately, this hinders the readability (and IMHO the
> maintainability) of the code.

Yeah, this use of locking is just prone to bugs. Horrible, if you ask me.

Temporarily dropping locks is one source of confusion and bugs.
Inconsistently returning with locks either held or not is another
horrible thing to do.

I'll have to stare at the code another couple of minutes to figure out
what's actually right or wrong.


-- 
Thanks,

David / dhildenb




