Re: [PATCH] mm: Unsigned 'nr_pages' always larger than zero

On 2019/10/18 4:42, John Hubbard wrote:
> On 10/16/19 8:19 PM, zhong jiang wrote:
>> With the help of unsigned_lesser_than_zero.cocci, we found an unsigned
>> 'nr_pages' compared with zero, while __get_user_pages_locked() returns
>> a long value.
>>
>> The patch uses a new local variable to store the return value of
>> __get_user_pages_locked(), then uses it to compare with zero.
> Hi Zhong,
>
> The above are actually more like notes to yourself, and while those are
> good to have, it's not yet a perfect commit description: it talks
> about how you got here, which is only part of the story. A higher-level
> summary is better; let the code itself cover the details.
>
> Like this:
>
> First line (subject): 
>
>    mm/gup: allow CMA migration to propagate errors back to caller
>
> Commit description:
>
> check_and_migrate_cma_pages() was recording the result of 
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change check_and_migrate_cma_pages() implementation so that it
> uses a signed variable instead, and propagates the results back
> to the caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
>> Suggested-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>> Signed-off-by: zhong jiang <zhongjiang@xxxxxxxxxx>
>> ---
>>  mm/gup.c | 8 +++++---
>>  1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/gup.c b/mm/gup.c
>> index 8f236a3..1fe0ceb 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>>  	bool drain_allow = true;
>>  	bool migrate_allow = true;
>>  	LIST_HEAD(cma_page_list);
>> +	long ret;
> Ira pointed out that this needs initialization, see below.
I missed that.  Thanks for pointing it out.
>>  
>>  check_again:
>>  	for (i = 0; i < nr_pages;) {
>> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>>  		 * again migrating any new CMA pages which we failed to isolate
>>  		 * earlier.
>>  		 */
>> -		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
>> +		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>>  						   pages, vmas, NULL,
>>  						   gup_flags);
>>  
>> -		if ((nr_pages > 0) && migrate_allow) {
>> +		nr_pages = ret;
> Although technically correct, it makes me feel odd to see the assignment
> done from signed to unsigned, *before* checking for >0. And Ira is hinting
> at the same thing, when he asks if we can return early here. See below...
At first I realized the same thing, but assigning the value from signed to unsigned
before checking for > 0 doesn't matter when we return early, because we only
return ret.  Of course, the assignment is meaningless when we can return early.

It is indeed better to do as you suggested. :-(
>> +		if ((ret > 0) && migrate_allow) {
>>  			drain_allow = true;
>>  			goto check_again;
>>  		}
>>  	}
>>  
>> -	return nr_pages;
>> +	return ret;
>>  }
>>  #else
>>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
>>
> So, overall, I'd recommend this, instead:
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 23a9f9c9d377..72bc027037fa 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>         bool drain_allow = true;
>         bool migrate_allow = true;
>         LIST_HEAD(cma_page_list);
> +       long ret = nr_pages;
>  
>  check_again:
>         for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>                  * again migrating any new CMA pages which we failed to isolate
>                  * earlier.
>                  */
> -               nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> +               ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>                                                    pages, vmas, NULL,
>                                                    gup_flags);
>  
> -               if ((nr_pages > 0) && migrate_allow) {
> +               if ((ret > 0) && migrate_allow) {
> +                       nr_pages = ret;
>                         drain_allow = true;
>                         goto check_again;
>                 }
>         }
>  
> -       return nr_pages;
> +       return ret;
>  }
>  #else
>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
>
>
> thanks,
>
> John Hubbard
> NVIDIA
>
>
Thanks for your suggestion. I will repost it as you have proposed.

Sincerely,
zhong jiang
