Re: [PATCH linux-next] mm: huge_memory: fix misused mapping_large_folio_support() for anon folios


 



> On 4 Jun 2024, at 0:57, David Hildenbrand wrote:
> 
> > On 04.06.24 07:47, xu.xin16@xxxxxxxxxx wrote:
> >> From: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>
> >>
> >> When I did a large folio split test, the WARNING
> >> "[ 5059.122759][  T166] Cannot split file folio to non-0 order"
> >> was triggered. But my test cases only cover anonymous folios,
> >> while mapping_large_folio_support() is only reasonable for page
> >> cache folios.
> >
> > Agreed.
> >
> > I wonder if mapping_large_folio_support() should either
> >
> > a) Complain if used for anon folios, so we can detect the wrong use more easily. (VM_WARN_ON_ONCE())
> 
> This is much better.
> 
> >
> > b) Return "true" for anonymous mappings, although that's more debatable.
> 
> This might fix the warning here, but the function could easily be misused again.

Yes, maybe we should rename mapping_large_folio_support() if we choose b).
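
For a), something like the below might be enough (a rough, untested sketch
against the current helper in include/linux/pagemap.h): an anonymous folio's
->mapping has PAGE_MAPPING_ANON set in its low bits, so the helper itself can
complain when it is handed one.

static inline bool mapping_large_folio_support(struct address_space *mapping)
{
	/* Anon mappings should never end up here: folio->mapping of an
	 * anonymous folio carries PAGE_MAPPING_ANON in its low bits. */
	VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
		     "Anonymous mapping always supports large folio");

	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
}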
 
> >
> >>
> >> In split_huge_page_to_list_to_order(), the folio passed to
> >> mapping_large_folio_support() may be an anonymous folio; the
> >> folio_test_anon() check is missing, so splitting an anonymous THP
> >> fails. The same applies to shmem_mapping(), so we'd better add
> >> a check for both. The shmem_mapping() call in __split_huge_page() is
> >> not affected: for anonymous folios the end parameter is set to -1, so
> >> (head[i].index >= end) is always false and shmem_mapping() is never called.
> >>
> >> Verified with /sys/kernel/debug/split_huge_pages: with this patch, a
> >> large anonymous THP is successfully split and the warning no longer appears.
> >>
> >> Signed-off-by: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>
> >> Cc: xu xin <xu.xin16@xxxxxxxxxx>
> >> Cc: Yang Yang <yang.yang29@xxxxxxxxxx>
> >> ---
> >>   mm/huge_memory.c | 38 ++++++++++++++++++++------------------
> >>   1 file changed, 20 insertions(+), 18 deletions(-)
> >>
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 317de2afd371..4c9c7e5ea20c 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -3009,31 +3009,33 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> >>   	if (new_order >= folio_order(folio))
> >>   		return -EINVAL;
> >>
> >> -	/* Cannot split anonymous THP to order-1 */
> >> -	if (new_order == 1 && folio_test_anon(folio)) {
> >> -		VM_WARN_ONCE(1, "Cannot split to order-1 folio");
> >> -		return -EINVAL;
> >> -	}
> >> -
> >>   	if (new_order) {
> >>   		/* Only swapping a whole PMD-mapped folio is supported */
> >>   		if (folio_test_swapcache(folio))
> >>   			return -EINVAL;
> >> -		/* Split shmem folio to non-zero order not supported */
> >> -		if (shmem_mapping(folio->mapping)) {
> >> -			VM_WARN_ONCE(1,
> >> -				"Cannot split shmem folio to non-0 order");
> >> -			return -EINVAL;
> >> -		}
> >> -		/* No split if the file system does not support large folio */
> >> -		if (!mapping_large_folio_support(folio->mapping)) {
> >> -			VM_WARN_ONCE(1,
> >> -				"Cannot split file folio to non-0 order");
> >> -			return -EINVAL;
> >> +
> >> +		if (folio_test_anon(folio)) {
> >> +			/* Cannot split anonymous THP to order-1 */
> >> +			if (new_order == 1) {
> >> +				VM_WARN_ONCE(1, "Cannot split to order-1 folio");
> >> +				return -EINVAL;
> >> +			}
> >> +		} else {
> >> +			/* Split shmem folio to non-zero order not supported */
> >> +			if (shmem_mapping(folio->mapping)) {
> >> +				VM_WARN_ONCE(1,
> >> +					"Cannot split shmem folio to non-0 order");
> >> +				return -EINVAL;
> >> +			}
> >> +			/* No split if the file system does not support large folio */
> >> +			if (!mapping_large_folio_support(folio->mapping)) {
> >> +				VM_WARN_ONCE(1,
> >> +					"Cannot split file folio to non-0 order");
> >> +				return -EINVAL;
> >> +			}
> >>   		}
> >>   	}
> >
> > What about the following sequence:
> >
> > if (folio_test_anon(folio)) {
> > 	if (new_order == 1)
> > 		...
> > } else if (new_order) {
> > 	if (shmem_mapping(...))
> > 		...
> > 	...
> > }
> >
> > if (folio_test_swapcache(folio) && new_order)
> > 	return -EINVAL;
> >
> > Should result in less churn and reduce indentation level.
> 
> Yeah, this looks better to me.
> 
> Best Regards,
> Yan, Zi
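
If I understand the suggested sequence correctly, the checks would end up
roughly like this (untested; the swapcache check is hoisted out of the
mapping-specific checks):

	if (folio_test_anon(folio)) {
		/* Cannot split anonymous THP to order-1 */
		if (new_order == 1) {
			VM_WARN_ONCE(1, "Cannot split to order-1 folio");
			return -EINVAL;
		}
	} else if (new_order) {
		/* Split shmem folio to non-zero order not supported */
		if (shmem_mapping(folio->mapping)) {
			VM_WARN_ONCE(1,
				"Cannot split shmem folio to non-0 order");
			return -EINVAL;
		}
		/* No split if the file system does not support large folio */
		if (!mapping_large_folio_support(folio->mapping)) {
			VM_WARN_ONCE(1,
				"Cannot split file folio to non-0 order");
			return -EINVAL;
		}
	}

	/* Only swapping a whole PMD-mapped folio is supported */
	if (folio_test_swapcache(folio) && new_order)
		return -EINVAL;

That indeed results in less churn and one less level of indentation than v1.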




