Re: [PATCH v2 1/2] mm: use aligned address in clear_gigantic_page()

Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> writes:

> On 2024/10/28 14:17, Huang, Ying wrote:
>> Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> writes:
>> 
>>> When clearing a gigantic page, it zeros the folio from the first
>>> page to the last page. If the caller passes addr_hint directly, which
>>> may not be the address of the first page of the folio, some
>>> architectures could flush the wrong cache lines when they use
>>> addr_hint as a hint. For a non-gigantic page, the base address is
>>> calculated internally, so even a wrong addr_hint has only a
>>> performance impact (process_huge_page() wants to process the target
>>> page last to keep its cache lines hot), with no functional impact.
>>>
>>> Let's pass the real accessed address to folio_zero_user() and use the
>>> aligned address in clear_gigantic_page() to fix it.
>>>
>>> Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
>>> ---
>>> v2:
>>> - update changelog to clarify the impact, per Andrew
>>>
>>>   fs/hugetlbfs/inode.c | 2 +-
>>>   mm/memory.c          | 1 +
>>>   2 files changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>>> index a4441fb77f7c..a5ea006f403e 100644
>>> --- a/fs/hugetlbfs/inode.c
>>> +++ b/fs/hugetlbfs/inode.c
>>> @@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>>>   			error = PTR_ERR(folio);
>>>   			goto out;
>>>   		}
>>> -		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
>>> +		folio_zero_user(folio, addr);
>> 'addr' is set with the following statement above,
>>
>> 		/* addr is the offset within the file (zero based) */
>> 		addr = index * hpage_size;
>>
>> So, we just don't need to ALIGN_DOWN() here.  Or do I miss
>> something?
>
> Yes, it is already aligned.
>> 
>>>   		__folio_mark_uptodate(folio);
>>>   		error = hugetlb_add_to_page_cache(folio, mapping, index);
>>>   		if (unlikely(error)) {
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 75c2dfd04f72..ef47b7ea5ddd 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -6821,6 +6821,7 @@ static void clear_gigantic_page(struct folio *folio, unsigned long addr,
>>>   	int i;
>>>
>>>   	might_sleep();
>>> +	addr = ALIGN_DOWN(addr, folio_size(folio));
>
> But for hugetlb_no_page(), we do need to align the addr, as it uses
> vmf->real_address, so I moved the alignment into
> clear_gigantic_page().

That sounds good.  You may need to revise the patch description to
describe why you made the change.  Maybe something like below?

In the current kernel, hugetlb_no_page() calls folio_zero_user() with
the fault address, which may not be aligned with the huge page size.
folio_zero_user() may then call clear_gigantic_page() with that
address, while clear_gigantic_page() requires the address to be huge
page size aligned.  So, this may cause memory corruption or an
information leak.

>>>   	for (i = 0; i < nr_pages; i++) {
>>>   		cond_resched();
>>>   		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);

--
Best Regards,
Huang, Ying



