Re: [PATCH linux-next v2] mm: huge_memory: fix misused mapping_large_folio_support() for anon folios

On 06.06.24 11:42, xu.xin16@xxxxxxxxxx wrote:
From: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>

When I ran a large folio split test, the WARNING
"[ 5059.122759][  T166] Cannot split file folfolio to non-0 order"
was triggered. But the test cases only cover anonymous folios,
while mapping_large_folio_support() is only meaningful for page
cache folios.

In split_huge_page_to_list_to_order(), the folio passed to
mapping_large_folio_support() may be an anonymous folio. The
folio_test_anon() check is missing, so splitting anonymous THPs
fails. The same applies to shmem_mapping(); we'd better add
a check for both. The shmem_mapping() call in __split_huge_page() is
not affected: for anonymous folios the end parameter is set to -1, so
(head[i].index >= end) is always false and shmem_mapping() is never called.
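
A rough sketch of the intended structure (for illustration only, not the
exact hunk from the patch; the real change is quoted in the diff below):

	/* Mapping-based restrictions only make sense for non-anon folios. */
	if (!folio_test_anon(folio) && new_order) {
		/* Split shmem folio to non-zero order not supported */
		if (shmem_mapping(folio->mapping)) {
			VM_WARN_ONCE(1, "Cannot split shmem folio to non-0 order");
			return -EINVAL;
		}
		/* No split if the file system does not support large folios */
		if (!mapping_large_folio_support(folio->mapping)) {
			VM_WARN_ONCE(1, "Cannot split file folio to non-0 order");
			return -EINVAL;
		}
	}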

Also add a VM_WARN_ON_ONCE() in mapping_large_folio_support()
for anon mappings, so we can detect such misuse more easily.
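
For illustration, assuming mapping_large_folio_support() keeps its current
shape in include/linux/pagemap.h, and assuming the anon case is detected via
the PAGE_MAPPING_ANON bit in the mapping pointer (an assumption, not the
patch text), the added warning could look like:

	static inline bool mapping_large_folio_support(struct address_space *mapping)
	{
		/* Only page cache mappings carry AS_LARGE_FOLIO_SUPPORT. */
		VM_WARN_ON_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON);

		return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
			test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
	}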

THP folios may exist in the page cache even when the file system does not
support large folios: when CONFIG_TRANSPARENT_HUGEPAGE
is enabled, khugepaged will try to collapse read-only file-backed pages
into THPs. But such a mapping does not actually support multi-order
large folios properly.

Using /sys/kernel/debug/split_huge_pages to verify this: with this
patch, large anonymous THPs are successfully split and the warning is gone.


Smaller nits:

+	} else if (new_order) {
  		/* Split shmem folio to non-zero order not supported */
  		if (shmem_mapping(folio->mapping)) {
  			VM_WARN_ONCE(1,
  				"Cannot split shmem folio to non-0 order");
  			return -EINVAL;
  		}
-		/* No split if the file system does not support large folio */
-		if (!mapping_large_folio_support(folio->mapping)) {
+		/* No split if the file system does not support large folio.

/*
 * No ...


+		 * Note that we might still have THPs in such mappings due to
+		 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
+		 * does not actually support large folios properly.
+		 */
+		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+			!mapping_large_folio_support(folio->mapping)) {

if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
    !mapping_large_folio_support(folio->mapping)) {

  			VM_WARN_ONCE(1,
  				"Cannot split file folio to non-0 order");
  			return -EINVAL;
  		}
  	}
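
With both nits applied, the added part would read:

		/*
		 * No split if the file system does not support large folio.
		 * Note that we might still have THPs in such mappings due to
		 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
		 * does not actually support large folios properly.
		 */
		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
		    !mapping_large_folio_support(folio->mapping)) {
			VM_WARN_ONCE(1,
				"Cannot split file folio to non-0 order");
			return -EINVAL;
		}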

+	/* Only swapping a whole PMD-mapped folio is supported */
+	if (folio_test_swapcache(folio) && new_order)
+		return -EINVAL;

  	is_hzp = is_huge_zero_folio(folio);
  	if (is_hzp) {

Acked-by: David Hildenbrand <david@xxxxxxxxxx>

--
Cheers,

David / dhildenb
